1 <?xml version='1.0' encoding='UTF-8'?>
2 <chapter xmlns="http://docbook.org/ns/docbook"
3 xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en-US"
5 <title xml:id="lustreproc.title">Lustre Parameters</title>
6 <para>The <literal>/proc</literal> and <literal>/sys</literal> file systems
7 act as an interface to internal data structures in the kernel. This chapter
8 describes parameters and tunables that are useful for optimizing and
9 monitoring aspects of a Lustre file system. It includes these sections:</para>
12 <para><xref linkend="dbdoclet.50438271_83523"/></para>
17 <title>Introduction to Lustre Parameters</title>
18 <para>Lustre parameters and statistics files provide an interface to
19 internal data structures in the kernel that enables monitoring and
20 tuning of many aspects of Lustre file system and application performance.
21 These data structures include settings and metrics for components such
22 as memory, networking, file systems, and kernel housekeeping routines,
23 which are available throughout the hierarchical file layout.
25 <para>Typically, metrics are accessed via the <literal>lctl get_param</literal>
26 command and settings are changed via the <literal>lctl set_param</literal> command.
27 While it is possible to access parameters in <literal>/proc</literal>
28 and <literal>/sys</literal> directly, the location of these parameters may
29 change between releases, so it is recommended to always use
30 <literal>lctl</literal> to access the parameters from userspace scripts.
31 Some data is server-only, some data is client-only, and some data is
32 exported from the client to the server and is thus duplicated in both
35 <para>In the examples in this chapter, <literal>#</literal> indicates
36 a command is entered as root. Lustre servers are named according to the
37 convention <literal><replaceable>fsname</replaceable>-<replaceable>MDT|OSTnumber</replaceable></literal>.
38 The standard UNIX wildcard designation (*) is used.</para>
40 <para>Some examples are shown below:</para>
43 <para> To obtain data from a Lustre client:</para>
44 <screen># lctl list_param osc.*
45 osc.testfs-OST0000-osc-ffff881071d5cc00
46 osc.testfs-OST0001-osc-ffff881071d5cc00
47 osc.testfs-OST0002-osc-ffff881071d5cc00
48 osc.testfs-OST0003-osc-ffff881071d5cc00
49 osc.testfs-OST0004-osc-ffff881071d5cc00
50 osc.testfs-OST0005-osc-ffff881071d5cc00
51 osc.testfs-OST0006-osc-ffff881071d5cc00
52 osc.testfs-OST0007-osc-ffff881071d5cc00
53 osc.testfs-OST0008-osc-ffff881071d5cc00</screen>
54 <para>In this example, information about OST connections available
55 on a client is displayed (indicated by "osc").</para>
60 <para> To see multiple levels of parameters, use multiple
61 wildcards:<screen># lctl list_param osc.*.*
62 osc.testfs-OST0000-osc-ffff881071d5cc00.active
63 osc.testfs-OST0000-osc-ffff881071d5cc00.blocksize
64 osc.testfs-OST0000-osc-ffff881071d5cc00.checksum_type
65 osc.testfs-OST0000-osc-ffff881071d5cc00.checksums
66 osc.testfs-OST0000-osc-ffff881071d5cc00.connect_flags
67 osc.testfs-OST0000-osc-ffff881071d5cc00.contention_seconds
68 osc.testfs-OST0000-osc-ffff881071d5cc00.cur_dirty_bytes
70 osc.testfs-OST0000-osc-ffff881071d5cc00.rpc_stats</screen></para>
75 <para> To view a specific file, use <literal>lctl get_param</literal>:
76 <screen># lctl get_param osc.lustre-OST0000*.rpc_stats</screen></para>
79 <para>For more information about using <literal>lctl</literal>, see <xref
80 xmlns:xlink="http://www.w3.org/1999/xlink" linkend="dbdoclet.50438194_51490"/>.</para>
81 <para>Data can also be viewed using the <literal>cat</literal> command
82 with the full path to the file. The form of the <literal>cat</literal>
83 command is similar to that of the <literal>lctl get_param</literal>
84 command with some differences. Unfortunately, as the Linux kernel has
85 changed over the years, the location of statistics and parameter files
86 has also changed, which means that the Lustre parameter files may be
87 located in the <literal>/proc</literal> directory, the
88 <literal>/sys</literal> directory, or the
89 <literal>/sys/kernel/debug</literal> directory, depending on the kernel
90 version and the Lustre version being used. The <literal>lctl</literal>
91 command insulates scripts from these changes and is preferred over direct
92 file access, except perhaps as part of a high-performance monitoring system.
93 In the <literal>cat</literal> command:</para>
96 <para>Replace the dots in the path with slashes.</para>
99 <para>Prepend the path with the appropriate directory component:
100 <screen>/{proc,sys}/{fs,sys}/{lustre,lnet}</screen></para>
103 <para>For example, an <literal>lctl get_param</literal> command may look like
104 this:<screen># lctl get_param osc.*.uuid
105 osc.testfs-OST0000-osc-ffff881071d5cc00.uuid=594db456-0685-bd16-f59b-e72ee90e9819
106 osc.testfs-OST0001-osc-ffff881071d5cc00.uuid=594db456-0685-bd16-f59b-e72ee90e9819
108 <para>The equivalent <literal>cat</literal> command may look like this:
109 <screen># cat /proc/fs/lustre/osc/*/uuid
110 594db456-0685-bd16-f59b-e72ee90e9819
111 594db456-0685-bd16-f59b-e72ee90e9819
114 <screen># cat /sys/fs/lustre/osc/*/uuid
115 594db456-0685-bd16-f59b-e72ee90e9819
116 594db456-0685-bd16-f59b-e72ee90e9819
118 <para>The <literal>llstat</literal> utility can be used to monitor some
119 Lustre file system I/O activity over a specified time period. For more
121 <xref xmlns:xlink="http://www.w3.org/1999/xlink" linkend="dbdoclet.50438219_23232"/></para>
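<para>As a sketch, assuming the <literal>-i</literal> interval option of
<literal>llstat</literal> and an OST <literal>stats</literal> file under
<literal>/proc/fs/lustre</literal>, I/O activity could be sampled every
5 seconds with a command similar to:</para>
<screen>oss# llstat -i 5 /proc/fs/lustre/obdfilter/testfs-OST0000/stats</screen>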
122 <para>Some data is imported from attached clients and is available in a
123 directory called <literal>exports</literal> located in the corresponding
124 per-service directory on a Lustre server. For example:
125 <screen>oss:/root# lctl list_param obdfilter.testfs-OST0000.exports.*
126 # hash ldlm_stats stats uuid</screen></para>
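<para>The per-export entries listed above can then be read individually.
For example, a command similar to the following displays the
<literal>stats</literal> data for every client connected to that
OST:</para>
<screen>oss# lctl get_param obdfilter.testfs-OST0000.exports.*.stats</screen>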
128 <title>Identifying Lustre File Systems and Servers</title>
129 <para>Several parameter files on the MGS list existing
130 Lustre file systems and file system servers. The examples below are for
131 a Lustre file system called
132 <literal>testfs</literal> with one MDT and three OSTs.</para>
135 <para> To view all known Lustre file systems, enter:</para>
136 <screen>mgs# lctl get_param mgs.*.filesystems
140 <para> To view the names of the servers in a file system in which at least one server is
142 enter:<screen>lctl get_param mgs.*.live.<replaceable>filesystem_name</replaceable></screen></para>
143 <para>For example:</para>
144 <screen>mgs# lctl get_param mgs.*.live.testfs
152 Secure RPC Config Rules:
154 imperative_recovery_state:
158 notify_duration_total: 0.001000
159 notify_duration_max: 0.001000
160 notify_count: 4</screen>
163 <para>To list all configured devices on the local node, enter:</para>
164 <screen># lctl device_list
166 1 UP mgc MGC192.168.10.34@tcp 1f45bb57-d9be-2ddb-c0b0-5431a49226705
167 2 UP mdt MDS MDS_uuid 3
168 3 UP lov testfs-mdtlov testfs-mdtlov_UUID 4
169 4 UP mds testfs-MDT0000 testfs-MDT0000_UUID 7
170 5 UP osc testfs-OST0000-osc testfs-mdtlov_UUID 5
171 6 UP osc testfs-OST0001-osc testfs-mdtlov_UUID 5
172 7 UP lov testfs-clilov-ce63ca00 08ac6584-6c4a-3536-2c6d-b36cf9cbdaa04
173 8 UP mdc testfs-MDT0000-mdc-ce63ca00 08ac6584-6c4a-3536-2c6d-b36cf9cbdaa05
174 9 UP osc testfs-OST0000-osc-ce63ca00 08ac6584-6c4a-3536-2c6d-b36cf9cbdaa05
175 10 UP osc testfs-OST0001-osc-ce63ca00 08ac6584-6c4a-3536-2c6d-b36cf9cbdaa05</screen>
176 <para>The information provided on each line includes:</para>
177 <para> - Device number</para>
178 <para> - Device status (UP, INactive, or STopping) </para>
179 <para> - Device name</para>
180 <para> - Device UUID</para>
181 <para> - Reference count (how many users this device has)</para>
184 <para>To display the name of any server, view the device
185 label:<screen>mds# e2label /dev/sda
186 testfs-MDT0000</screen></para>
192 <title>Tuning Multi-Block Allocation (mballoc)</title>
193 <para>Capabilities supported by <literal>mballoc</literal> include:</para>
196 <para> Pre-allocation for single files to help reduce fragmentation.</para>
199 <para> Pre-allocation for a group of files to enable packing of small files into large,
200 contiguous chunks.</para>
203 <para> Stream allocation to help decrease the seek rate.</para>
206 <para>The following <literal>mballoc</literal> tunables are available:</para>
207 <informaltable frame="all">
209 <colspec colname="c1" colwidth="30*"/>
210 <colspec colname="c2" colwidth="70*"/>
214 <para><emphasis role="bold">Field</emphasis></para>
217 <para><emphasis role="bold">Description</emphasis></para>
225 <literal>mb_max_to_scan</literal></para>
228 <para>Maximum number of free chunks that <literal>mballoc</literal> scans before making a
229 final allocation decision, to avoid a livelock situation.</para>
235 <literal>mb_min_to_scan</literal></para>
238 <para>Minimum number of free chunks that <literal>mballoc</literal> searches before
239 picking the best chunk for allocation. This is useful for small requests to reduce
240 fragmentation of big free chunks.</para>
246 <literal>mb_order2_req</literal></para>
249 <para>For requests equal to 2^N, where N >= <literal>mb_order2_req</literal>, a
250 fast search is done using a base 2 buddy allocation service.</para>
256 <literal>mb_small_req</literal></para>
259 <para><literal>mb_small_req</literal> - Defines (in MB) the upper bound of "small
261 <para><literal>mb_large_req</literal> - Defines (in MB) the lower bound of "large
263 <para>Requests are handled differently based on size:<itemizedlist>
265 <para>< <literal>mb_small_req</literal> - Requests are packed together to
266 form large, aggregated requests.</para>
269 <para>> <literal>mb_small_req</literal> and < <literal>mb_large_req</literal>
270 - Requests are primarily allocated linearly.</para>
273 <para>> <literal>mb_large_req</literal> - Requests are allocated directly, since hard disk
274 seek time is less of a concern in this case.</para>
276 </itemizedlist></para>
277 <para>In general, small requests are combined to create larger requests, which are
278 then placed close to one another to minimize the number of seeks required to access
285 <literal>mb_large_req</literal></para>
291 <literal>prealloc_table</literal></para>
294 <para>A table of values used to preallocate space when a new request is received. By
295 default, the table looks like
296 this:<screen>prealloc_table
297 4 8 16 32 64 128 256 512 1024 2048 </screen></para>
298 <para>When a new request is received, space is preallocated at the next higher
299 increment specified in the table. For example, for requests of less than 4 file
300 system blocks, 4 blocks of space are preallocated; for requests between 4 and 8, 8
301 blocks are preallocated; and so forth.</para>
302 <para>Although customized values can be entered in the table, the performance of
303 general usage file systems will not typically be improved by modifying the table (in
304 fact, in ext4 systems, the table values are fixed). However, for some specialized
305 workloads, tuning the <literal>prealloc_table</literal> values may result in smarter
306 preallocation decisions. </para>
312 <literal>mb_group_prealloc</literal></para>
315 <para>The amount of space (in kilobytes) preallocated for groups of small
322 <para>Buddy group cache information found in
323 <literal>/sys/fs/ldiskfs/<replaceable>disk_device</replaceable>/mb_groups</literal> may
324 be useful for assessing on-disk fragmentation. For
325 example:<screen>cat /proc/fs/ldiskfs/loop0/mb_groups
326 #group: free free frags first pa [ 2^0 2^1 2^2 2^3 2^4 2^5 2^6 2^7 2^8 2^9
328 #0 : 2936 2936 1 42 0 [ 0 0 0 1 1 1 1 2 0 1
329 2 0 0 0 ]</screen></para>
330 <para>In this example, the columns show:<itemizedlist>
332 <para>#group number</para>
335 <para>Available blocks in the group</para>
338 <para>Blocks free on a disk</para>
341 <para>Number of free fragments</para>
344 <para>First free block in the group</para>
347 <para>Number of preallocated chunks (not blocks)</para>
350 <para>A series of available chunks of different sizes</para>
352 </itemizedlist></para>
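<para>The <literal>mballoc</literal> tunables described above live alongside
<literal>mb_groups</literal> in the ldiskfs tree. As a sketch, assuming an
ldiskfs OST on <literal>/dev/sda</literal>, an individual tunable such as
<literal>mb_max_to_scan</literal> could be inspected, and adjusted where the
kernel exposes it as writable (the value shown is illustrative only):</para>
<screen>oss# cat /sys/fs/ldiskfs/sda/mb_max_to_scan
oss# echo 200 > /sys/fs/ldiskfs/sda/mb_max_to_scan</screen>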
355 <title>Monitoring Lustre File System I/O</title>
356 <para>A number of system utilities are provided to enable collection of data related to I/O
357 activity in a Lustre file system. In general, the data collected describes:</para>
360 <para> Data transfer rates and throughput of inputs and outputs external to the Lustre file
361 system, such as network requests or disk I/O operations performed</para>
364 <para> Data about the throughput or transfer rates of internal Lustre file system data, such
365 as locks or allocations. </para>
369 <para>It is highly recommended that you complete baseline testing for your Lustre file system
370 to determine normal I/O activity for your hardware, network, and system workloads. Baseline
371 data will allow you to easily determine when performance becomes degraded in your system.
372 Two particularly useful baseline statistics are:</para>
375 <para><literal>brw_stats</literal> – Histogram data characterizing I/O requests to the
376 OSTs. For more details, see <xref xmlns:xlink="http://www.w3.org/1999/xlink"
377 linkend="dbdoclet.50438271_55057"/>.</para>
380 <para><literal>rpc_stats</literal> – Histogram data showing information about RPCs made by
381 clients. For more details, see <xref xmlns:xlink="http://www.w3.org/1999/xlink"
382 linkend="MonitoringClientRCPStream"/>.</para>
386 <section remap="h3" xml:id="MonitoringClientRCPStream">
388 <primary>proc</primary>
389 <secondary>watching RPC</secondary>
390 </indexterm>Monitoring the Client RPC Stream</title>
391 <para>The <literal>rpc_stats</literal> file contains histogram data showing information about
392 remote procedure calls (RPCs) that have been made since this file was last cleared. The
393 histogram data can be cleared by writing any value into the <literal>rpc_stats</literal>
395 <para><emphasis role="italic"><emphasis role="bold">Example:</emphasis></emphasis></para>
396 <screen># lctl get_param osc.testfs-OST0000-osc-ffff810058d2f800.rpc_stats
397 snapshot_time: 1372786692.389858 (secs.usecs)
398 read RPCs in flight: 0
399 write RPCs in flight: 1
400 dio read RPCs in flight: 0
401 dio write RPCs in flight: 0
402 pending write pages: 256
403 pending read pages: 0
406 pages per rpc rpcs % cum % | rpcs % cum %
415 256: 850 100 100 | 18346 99 100
418 rpcs in flight rpcs % cum % | rpcs % cum %
419 0: 691 81 81 | 1740 9 9
420 1: 48 5 86 | 938 5 14
421 2: 29 3 90 | 1059 5 20
422 3: 17 2 92 | 1052 5 26
423 4: 13 1 93 | 920 5 31
424 5: 12 1 95 | 425 2 33
425 6: 10 1 96 | 389 2 35
426 7: 30 3 100 | 11373 61 97
427 8: 0 0 100 | 460 2 100
430 offset rpcs % cum % | rpcs % cum %
431 0: 850 100 100 | 18347 99 99
439 128: 0 0 100 | 4 0 100
442 <para>The header information includes:</para>
445 <para><literal>snapshot_time</literal> - UNIX epoch instant the file was read.</para>
448 <para><literal>read RPCs in flight</literal> - Number of read RPCs issued by the OSC, but
449 not complete at the time of the snapshot. This value should always be less than or equal
450 to <literal>max_rpcs_in_flight</literal>.</para>
453 <para><literal>write RPCs in flight</literal> - Number of write RPCs issued by the OSC,
454 but not complete at the time of the snapshot. This value should always be less than or
455 equal to <literal>max_rpcs_in_flight</literal>.</para>
458 <para><literal>dio read RPCs in flight</literal> - Direct I/O (as opposed to block I/O)
459 read RPCs issued but not completed at the time of the snapshot.</para>
462 <para><literal>dio write RPCs in flight</literal> - Direct I/O (as opposed to block I/O)
463 write RPCs issued but not completed at the time of the snapshot.</para>
466 <para><literal>pending write pages</literal> - Number of pending write pages that have
467 been queued for I/O in the OSC.</para>
470 <para><literal>pending read pages</literal> - Number of pending read pages that have been
471 queued for I/O in the OSC.</para>
474 <para>The tabular data is described in the table below. Each row in the table shows the number
475 of reads or writes (<literal>rpcs</literal>) occurring for the statistic, the relative
476 percentage (<literal>%</literal>) of total reads or writes, and the cumulative percentage
477 (<literal>cum %</literal>) to that point in the table for the statistic.</para>
478 <informaltable frame="all">
480 <colspec colname="c1" colwidth="40*"/>
481 <colspec colname="c2" colwidth="60*"/>
485 <para><emphasis role="bold">Field</emphasis></para>
488 <para><emphasis role="bold">Description</emphasis></para>
495 <para> pages per RPC</para>
498 <para>Shows cumulative RPC reads and writes organized according to the number of
499 pages in the RPC. A single page RPC increments the <literal>0:</literal>
505 <para> RPCs in flight</para>
508 <para> Shows the number of RPCs that are pending when an RPC is sent. When the first
509 RPC is sent, the <literal>0:</literal> row is incremented. If the first RPC is
510 sent while another RPC is pending, the <literal>1:</literal> row is incremented
519 <para> The page index of the first page read from or written to the object by the
526 <para><emphasis role="italic"><emphasis role="bold">Analysis:</emphasis></emphasis></para>
527 <para>This table provides a way to visualize the concurrency of the RPC stream. Ideally, you
528 will see a large clump around the <literal>max_rpcs_in_flight</literal> value, which shows
529 that the network is being kept busy.</para>
530 <para>For information about optimizing the client I/O RPC stream, see <xref
531 xmlns:xlink="http://www.w3.org/1999/xlink" linkend="TuningClientIORPCStream"/>.</para>
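<para>Because the histogram accumulates data since it was last cleared, it
can be reset before a measurement run by writing a value into the file, for
example:</para>
<screen>client# lctl set_param osc.*.rpc_stats=0</screen>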
533 <section xml:id="lustreproc.clientstats" remap="h3">
535 <primary>proc</primary>
536 <secondary>client stats</secondary>
537 </indexterm>Monitoring Client Activity</title>
538 <para>The <literal>stats</literal> file maintains statistics accumulated during typical
539 operation of a client across the VFS interface of the Lustre file system. Only non-zero
540 parameters are displayed in the file. </para>
541 <para>Client statistics are enabled by default.</para>
543 <para>Statistics for all mounted file systems can be discovered by
544 entering:<screen>lctl get_param llite.*.stats</screen></para>
546 <para><emphasis role="italic"><emphasis role="bold">Example:</emphasis></emphasis></para>
547 <screen>client# lctl get_param llite.*.stats
548 snapshot_time 1308343279.169704 secs.usecs
549 dirty_pages_hits 14819716 samples [regs]
550 dirty_pages_misses 81473472 samples [regs]
551 read_bytes 36502963 samples [bytes] 1 26843582 55488794
552 write_bytes 22985001 samples [bytes] 0 125912 3379002
553 brw_read 2279 samples [pages] 1 1 2270
554 ioctl 186749 samples [regs]
555 open 3304805 samples [regs]
556 close 3331323 samples [regs]
557 seek 48222475 samples [regs]
558 fsync 963 samples [regs]
559 truncate 9073 samples [regs]
560 setxattr 19059 samples [regs]
561 getxattr 61169 samples [regs]
563 <para> The statistics can be cleared by echoing an empty string into the
564 <literal>stats</literal> file or by using the command:
565 <screen>lctl set_param llite.*.stats=0</screen></para>
566 <para>The statistics displayed are described in the table below.</para>
567 <informaltable frame="all">
569 <colspec colname="c1" colwidth="3*"/>
570 <colspec colname="c2" colwidth="7*"/>
574 <para><emphasis role="bold">Entry</emphasis></para>
577 <para><emphasis role="bold">Description</emphasis></para>
585 <literal>snapshot_time</literal></para>
588 <para>UNIX epoch instant the stats file was read.</para>
594 <literal>dirty_pages_hits</literal></para>
597 <para>The number of write operations that have been satisfied by the dirty page
598 cache. See <xref xmlns:xlink="http://www.w3.org/1999/xlink"
599 linkend="TuningClientIORPCStream"/> for more information about dirty cache
600 behavior in a Lustre file system.</para>
606 <literal>dirty_pages_misses</literal></para>
609 <para>The number of write operations that were not satisfied by the dirty page
616 <literal>read_bytes</literal></para>
619 <para>The number of read operations that have occurred. Three additional parameters
620 are displayed:</para>
625 <para>The minimum number of bytes read in a single request since the counter
632 <para>The maximum number of bytes read in a single request since the counter
639 <para>The accumulated sum of bytes of all read requests since the counter was
649 <literal>write_bytes</literal></para>
652 <para>The number of write operations that have occurred. Three additional parameters
653 are displayed:</para>
658 <para>The minimum number of bytes written in a single request since the
659 counter was reset.</para>
665 <para>The maximum number of bytes written in a single request since the
666 counter was reset.</para>
672 <para>The accumulated sum of bytes of all write requests since the counter was
682 <literal>brw_read</literal></para>
685 <para>The number of pages that have been read. Three additional parameters are
691 <para>The minimum number of bytes read in a single block read/write
692 (<literal>brw</literal>) read request since the counter was reset.</para>
698 <para>The maximum number of bytes read in a single <literal>brw</literal> read
699 request since the counter was reset.</para>
705 <para>The accumulated sum of bytes of all <literal>brw</literal> read requests
706 since the counter was reset.</para>
715 <literal>ioctl</literal></para>
718 <para>The number of combined file and directory <literal>ioctl</literal>
725 <literal>open</literal></para>
728 <para>The number of open operations that have succeeded.</para>
734 <literal>close</literal></para>
737 <para>The number of close operations that have succeeded.</para>
743 <literal>seek</literal></para>
746 <para>The number of times <literal>seek</literal> has been called.</para>
752 <literal>fsync</literal></para>
755 <para>The number of times <literal>fsync</literal> has been called.</para>
761 <literal>truncate</literal></para>
764 <para>The total number of calls to both locked and lockless
765 <literal>truncate</literal>.</para>
771 <literal>setxattr</literal></para>
774 <para>The number of times extended attributes have been set. </para>
780 <literal>getxattr</literal></para>
783 <para>The number of times value(s) of extended attributes have been fetched.</para>
789 <para><emphasis role="italic"><emphasis role="bold">Analysis:</emphasis></emphasis></para>
790 <para>Information is provided about the amount and type of I/O activity taking place on the
795 <primary>proc</primary>
796 <secondary>read/write survey</secondary>
797 </indexterm>Monitoring Client Read-Write Offset Statistics</title>
798 <para>When the <literal>offset_stats</literal> parameter is set, statistics are maintained for
799 occurrences of a series of read or write calls from a process that did not access the next
800 sequential location. The <literal>OFFSET</literal> field is reset to 0 (zero) whenever a
801 different file is read or written.</para>
803 <para>By default, statistics are not collected in the <literal>offset_stats</literal>,
804 <literal>extents_stats</literal>, and <literal>extents_stats_per_process</literal> files
805 to reduce monitoring overhead when this information is not needed. The collection of
806 statistics in all three of these files is activated by writing
807 anything, except for 0 (zero) and "disable", into any one of the
810 <para><emphasis role="italic"><emphasis role="bold">Example:</emphasis></emphasis></para>
811 <screen># lctl get_param llite.testfs-f57dee0.offset_stats
812 snapshot_time: 1155748884.591028 (secs.usecs)
813 RANGE RANGE SMALLEST LARGEST
814 R/W PID START END EXTENT EXTENT OFFSET
815 R 8385 0 128 128 128 0
816 R 8385 0 224 224 224 -128
817 W 8385 0 250 50 100 0
818 W 8385 100 1110 10 500 -150
819 W 8384 0 5233 5233 5233 0
820 R 8385 500 600 100 100 -610</screen>
821 <para>In this example, <literal>snapshot_time</literal> is the UNIX epoch instant the file was
822 read. The tabular data is described in the table below.</para>
823 <para>The <literal>offset_stats</literal> file can be cleared by
824 entering:<screen>lctl set_param llite.*.offset_stats=0</screen></para>
825 <informaltable frame="all">
827 <colspec colname="c1" colwidth="50*"/>
828 <colspec colname="c2" colwidth="50*"/>
832 <para><emphasis role="bold">Field</emphasis></para>
835 <para><emphasis role="bold">Description</emphasis></para>
845 <para>Indicates if the non-sequential call was a read or write</para>
853 <para>Process ID of the process that made the read/write call.</para>
858 <para>RANGE START/RANGE END</para>
861 <para>Range in which the read/write calls were sequential.</para>
866 <para>SMALLEST EXTENT </para>
869 <para>Smallest single read/write in the corresponding range (in bytes).</para>
874 <para>LARGEST EXTENT </para>
877 <para>Largest single read/write in the corresponding range (in bytes).</para>
885 <para>Difference between the previous range end and the current range start.</para>
891 <para><emphasis role="italic"><emphasis role="bold">Analysis:</emphasis></emphasis></para>
892 <para>This data provides an indication of how contiguous or fragmented the data is. For
893 example, the fourth entry in the example above shows that the writes from this process were sequential
894 in the range 100 to 1110 with the minimum write 10 bytes and the maximum write 500 bytes.
895 The range started with an offset of -150 from the <literal>RANGE END</literal> of the
896 previous entry in the example.</para>
900 <primary>proc</primary>
901 <secondary>read/write survey</secondary>
902 </indexterm>Monitoring Client Read-Write Extent Statistics</title>
903 <para>For in-depth troubleshooting, client read-write extent statistics can be accessed to
904 obtain more detail about read/write I/O extents for the file system or for a particular
907 <para>By default, statistics are not collected in the <literal>offset_stats</literal>,
908 <literal>extents_stats</literal>, and <literal>extents_stats_per_process</literal> files
909 to reduce monitoring overhead when this information is not needed. The collection of
910 statistics in all three of these files is activated by writing
911 anything, except for 0 (zero) and "disable", into any one of the
915 <title>Client-Based I/O Extent Size Survey</title>
916 <para>The <literal>extents_stats</literal> histogram in the
917 <literal>llite</literal> directory shows the statistics for the sizes
918 of the read/write I/O extents. This file does not maintain
919 per-process statistics.</para>
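<para>Because collection is disabled by default, the histogram must be
enabled before it is populated, for example:</para>
<screen>client# lctl set_param llite.*.extents_stats=1</screen>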
920 <para><emphasis role="italic"><emphasis role="bold">Example:</emphasis></emphasis></para>
921 <screen># lctl get_param llite.testfs-*.extents_stats
922 snapshot_time: 1213828728.348516 (secs.usecs)
924 extents calls % cum% | calls % cum%
926 0K - 4K : 0 0 0 | 2 2 2
927 4K - 8K : 0 0 0 | 0 0 2
928 8K - 16K : 0 0 0 | 0 0 2
929 16K - 32K : 0 0 0 | 20 23 26
930 32K - 64K : 0 0 0 | 0 0 26
931 64K - 128K : 0 0 0 | 51 60 86
932 128K - 256K : 0 0 0 | 0 0 86
933 256K - 512K : 0 0 0 | 0 0 86
934 512K - 1024K : 0 0 0 | 0 0 86
935 1M - 2M : 0 0 0 | 11 13 100</screen>
936 <para>In this example, <literal>snapshot_time</literal> is the UNIX epoch instant the file
937 was read. The table shows cumulative extents organized according to size with statistics
938 provided separately for reads and writes. Each row in the table shows the number of RPCs
939 for reads and writes respectively (<literal>calls</literal>), the relative percentage of
940 total calls (<literal>%</literal>), and the cumulative percentage to
941 that point in the table of calls (<literal>cum %</literal>). </para>
942 <para> The file can be cleared by issuing the following command:
943 <screen># lctl set_param llite.testfs-*.extents_stats=1</screen></para>
946 <title>Per-Process Client I/O Statistics</title>
947 <para>The <literal>extents_stats_per_process</literal> file maintains the I/O extent size
948 statistics on a per-process basis.</para>
949 <para><emphasis role="italic"><emphasis role="bold">Example:</emphasis></emphasis></para>
950 <screen># lctl get_param llite.testfs-*.extents_stats_per_process
951 snapshot_time: 1213828762.204440 (secs.usecs)
953 extents calls % cum% | calls % cum%
956 0K - 4K : 0 0 0 | 0 0 0
957 4K - 8K : 0 0 0 | 0 0 0
958 8K - 16K : 0 0 0 | 0 0 0
959 16K - 32K : 0 0 0 | 0 0 0
960 32K - 64K : 0 0 0 | 0 0 0
961 64K - 128K : 0 0 0 | 0 0 0
962 128K - 256K : 0 0 0 | 0 0 0
963 256K - 512K : 0 0 0 | 0 0 0
964 512K - 1024K : 0 0 0 | 0 0 0
965 1M - 2M : 0 0 0 | 10 100 100
968 0K - 4K : 0 0 0 | 0 0 0
969 4K - 8K : 0 0 0 | 0 0 0
970 8K - 16K : 0 0 0 | 0 0 0
971 16K - 32K : 0 0 0 | 20 100 100
974 0K - 4K : 0 0 0 | 0 0 0
975 4K - 8K : 0 0 0 | 0 0 0
976 8K - 16K : 0 0 0 | 0 0 0
977 16K - 32K : 0 0 0 | 0 0 0
978 32K - 64K : 0 0 0 | 0 0 0
979 64K - 128K : 0 0 0 | 16 100 100
982 0K - 4K : 0 0 0 | 1 100 100
985 0K - 4K : 0 0 0 | 1 100 100
988 <para>This table shows cumulative extents organized according to size for each process ID
989 (PID) with statistics provided separately for reads and writes. Each row in the table
990 shows the number of RPCs for reads and writes respectively (<literal>calls</literal>), the
991 relative percentage of total calls (<literal>%</literal>), and the cumulative percentage
992 to that point in the table of calls (<literal>cum %</literal>). </para>
995 <section xml:id="dbdoclet.50438271_55057">
997 <primary>proc</primary>
998 <secondary>block I/O</secondary>
999 </indexterm>Monitoring the OST Block I/O Stream</title>
1000 <para>The <literal>brw_stats</literal> file in the <literal>obdfilter</literal> directory
1001 contains histogram data showing statistics for the number of I/O requests sent to the disk,
1002 their size, and whether they are contiguous on the disk or not.</para>
1003 <para><emphasis role="italic"><emphasis role="bold">Example:</emphasis></emphasis></para>
1004 <para>Enter on the OSS:</para>
1005 <screen># lctl get_param obdfilter.testfs-OST0000.brw_stats
1006 snapshot_time: 1372775039.769045 (secs.usecs)
1008 pages per bulk r/w rpcs % cum % | rpcs % cum %
1009 1: 108 100 100 | 39 0 0
1014 32: 0 0 100 | 17 0 0
1015 64: 0 0 100 | 12 0 0
1016 128: 0 0 100 | 24 0 0
1017 256: 0 0 100 | 23142 99 100
1020 discontiguous pages rpcs % cum % | rpcs % cum %
1021 0: 108 100 100 | 23245 100 100
1024 discontiguous blocks rpcs % cum % | rpcs % cum %
1025 0: 108 100 100 | 23243 99 99
1026 1: 0 0 100 | 2 0 100
1029 disk fragmented I/Os ios % cum % | ios % cum %
1031 1: 14 12 100 | 23243 99 99
1032 2: 0 0 100 | 2 0 100
1035 disk I/Os in flight ios % cum % | ios % cum %
1036 1: 14 100 100 | 20896 89 89
1037 2: 0 0 100 | 1071 4 94
1038 3: 0 0 100 | 573 2 96
1039 4: 0 0 100 | 300 1 98
1040 5: 0 0 100 | 166 0 98
1041 6: 0 0 100 | 108 0 99
1042 7: 0 0 100 | 81 0 99
1043 8: 0 0 100 | 47 0 99
1044 9: 0 0 100 | 5 0 100
1047 I/O time (1/1000s) ios % cum % | ios % cum %
1050 4: 14 12 100 | 27 0 0
1052 16: 0 0 100 | 31 0 0
1053 32: 0 0 100 | 38 0 0
1054 64: 0 0 100 | 18979 81 82
1055 128: 0 0 100 | 943 4 86
1056 256: 0 0 100 | 1233 5 91
1057 512: 0 0 100 | 1825 7 99
1058 1K: 0 0 100 | 99 0 99
1059 2K: 0 0 100 | 0 0 99
1060 4K: 0 0 100 | 0 0 99
1061 8K: 0 0 100 | 49 0 100
1064 disk I/O size ios % cum % | ios % cum %
1065 4K: 14 100 100 | 41 0 0
1067 16K: 0 0 100 | 1 0 0
1068 32K: 0 0 100 | 0 0 0
1069 64K: 0 0 100 | 4 0 0
1070 128K: 0 0 100 | 17 0 0
1071 256K: 0 0 100 | 12 0 0
1072 512K: 0 0 100 | 24 0 0
1073 1M: 0 0 100 | 23142 99 100
1075 <para>The tabular data is described in the table below. Each row in the table shows the number
1076 of reads and writes occurring for the statistic (<literal>ios</literal>), the relative
1077 percentage of total reads or writes (<literal>%</literal>), and the cumulative percentage to
1078 that point in the table for the statistic (<literal>cum %</literal>). </para>
1079 <informaltable frame="all">
1081 <colspec colname="c1" colwidth="40*"/>
1082 <colspec colname="c2" colwidth="60*"/>
1086 <para><emphasis role="bold">Field</emphasis></para>
1089 <para><emphasis role="bold">Description</emphasis></para>
1097 <literal>pages per bulk r/w</literal></para>
1100 <para>Number of pages per RPC request, which should match aggregate client
1101 <literal>rpc_stats</literal> (see <xref
1102 xmlns:xlink="http://www.w3.org/1999/xlink" linkend="MonitoringClientRCPStream"
1109 <literal>discontiguous pages</literal></para>
1112 <para>Number of discontinuities in the logical file offset of each page in a single
1119 <literal>discontiguous blocks</literal></para>
1122 <para>Number of discontinuities in the physical block allocation in the file system
1123 for a single RPC.</para>
1128 <para><literal>disk fragmented I/Os</literal></para>
1131 <para>Number of I/Os that were not written entirely sequentially.</para>
1136 <para><literal>disk I/Os in flight</literal></para>
1139 <para>Number of disk I/Os currently pending.</para>
1144 <para><literal>I/O time (1/1000s)</literal></para>
1147 <para>Amount of time for each I/O operation to complete.</para>
1152 <para><literal>disk I/O size</literal></para>
1155 <para>Size of each I/O operation.</para>
1161 <para><emphasis role="italic"><emphasis role="bold">Analysis:</emphasis></emphasis></para>
1162 <para>This data provides an indication of extent size and distribution in the file
1167 <title>Tuning Lustre File System I/O</title>
1168 <para>Each OSC has its own tree of tunables. For example:</para>
1169 <screen>$ lctl list_param osc.*.*
1170 osc.myth-OST0000-osc-ffff8804296c2800.active
1171 osc.myth-OST0000-osc-ffff8804296c2800.blocksize
1172 osc.myth-OST0000-osc-ffff8804296c2800.checksum_dump
1173 osc.myth-OST0000-osc-ffff8804296c2800.checksum_type
1174 osc.myth-OST0000-osc-ffff8804296c2800.checksums
1175 osc.myth-OST0000-osc-ffff8804296c2800.connect_flags
1178 osc.myth-OST0000-osc-ffff8804296c2800.state
1179 osc.myth-OST0000-osc-ffff8804296c2800.stats
1180 osc.myth-OST0000-osc-ffff8804296c2800.timeouts
1181 osc.myth-OST0000-osc-ffff8804296c2800.unstable_stats
1182 osc.myth-OST0000-osc-ffff8804296c2800.uuid
1183 osc.myth-OST0001-osc-ffff8804296c2800.active
1184 osc.myth-OST0001-osc-ffff8804296c2800.blocksize
1185 osc.myth-OST0001-osc-ffff8804296c2800.checksum_dump
1186 osc.myth-OST0001-osc-ffff8804296c2800.checksum_type
1190 <para>The following sections describe some of the parameters that can
1191 be tuned in a Lustre file system.</para>
1192 <section remap="h3" xml:id="TuningClientIORPCStream">
1194 <primary>proc</primary>
1195 <secondary>RPC tunables</secondary>
1196 </indexterm>Tuning the Client I/O RPC Stream</title>
1197 <para>Ideally, an optimal amount of data is packed into each I/O RPC
1198 and a consistent number of issued RPCs are in progress at any time.
1199 To help optimize the client I/O RPC stream, several tuning variables
1200 are provided to adjust behavior according to network conditions and
1201 cluster size. For information about monitoring the client I/O RPC
1203 xmlns:xlink="http://www.w3.org/1999/xlink" linkend="MonitoringClientRCPStream"/>.</para>
1204 <para>RPC stream tunables include:</para>
1208 <para><literal>osc.<replaceable>osc_instance</replaceable>.checksums</literal>
1209 - Controls whether the client will calculate data integrity
1210 checksums for the bulk data transferred to the OST. Data
1211 integrity checksums are enabled by default. The algorithm used
1212 can be set using the <literal>checksum_type</literal> parameter.
1216 <para><literal>osc.<replaceable>osc_instance</replaceable>.checksum_type</literal>
1217 - Controls the data integrity checksum algorithm used by the
1218 client. The available algorithms are determined by the set of
1219 algorithms supported by both the client and the OST. The checksum algorithm used by default is determined
1220 by first selecting the fastest algorithms available on the OST,
1221 and then selecting the fastest of those algorithms on the client,
1222 which depends on available optimizations in the CPU hardware and
1223 kernel. The default algorithm can be overridden by writing the
1224 algorithm name into the <literal>checksum_type</literal>
1225 parameter. Available checksum types can be seen on the client by
1226 reading the <literal>checksum_type</literal> parameter. Currently
1227 supported checksum types are:
1228 <literal>adler</literal>,
1229 <literal>crc32</literal>,
1230 <literal>crc32c</literal>
1232 <para condition="l2C">
1233 In Lustre release 2.12 additional checksum types were added to
1234 allow end-to-end checksum integration with T10-PI capable
1235 hardware. The client will compute the appropriate checksum
1236 type, based on the checksum type used by the storage, for the
1237 RPC checksum, which will be verified by the server and passed
1238 on to the storage. The T10-PI checksum types are:
1239 <literal>t10ip512</literal>,
1240 <literal>t10ip4K</literal>,
1241 <literal>t10crc512</literal>,
1242 <literal>t10crc4K</literal>
1246 <para><literal>osc.<replaceable>osc_instance</replaceable>.max_dirty_mb</literal>
1247 - Controls how many MiB of dirty data can be written into the
1248 client pagecache for writes by <emphasis>each</emphasis> OSC.
1249 When this limit is reached, additional writes block until
1250 previously-cached data is written to the server. This may be
1251 changed by the <literal>lctl set_param</literal> command. Only
1252 values larger than 0 and smaller than the lesser of 2048 MiB or
1253 1/4 of client RAM are valid. Performance can suffer if the
1254 client cannot aggregate enough data per OSC to form a full RPC
1255 (as set by the <literal>max_pages_per_rpc</literal> parameter),
1256 unless the application is doing very large writes itself.
1258 <para>To maximize performance, the value for
1259 <literal>max_dirty_mb</literal> is recommended to be at least
1260 4 * <literal>max_pages_per_rpc</literal> *
1261 <literal>max_rpcs_in_flight</literal>.
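<para>For example, with the default 4 MiB RPC size
(<literal>max_pages_per_rpc=1024</literal> on a client with a 4 KiB
<literal>PAGE_SIZE</literal>) and the default
<literal>max_rpcs_in_flight=8</literal>, this guideline suggests a
<literal>max_dirty_mb</literal> of at least 4 * 4 MiB * 8 = 128 MiB per
OSC.</para>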
1265 <para><literal>osc.<replaceable>osc_instance</replaceable>.cur_dirty_bytes</literal>
1266 - A read-only value that returns the current number of bytes
1267 written and cached by this OSC.
1271 <para><literal>osc.<replaceable>osc_instance</replaceable>.max_pages_per_rpc</literal>
1272 - The maximum number of pages that will be sent in a single RPC
1273 request to the OST. The minimum value is one page and the maximum
1274 value is 16 MiB (4096 pages on systems with <literal>PAGE_SIZE</literal>
1275 of 4 KiB), with the default value of 4 MiB in one RPC. The upper
1276 limit may also be constrained by the <literal>ofd.*.brw_size</literal>
1277 setting on the OSS, and applies to all clients connected to that
1278 OST. It is also possible to specify a units suffix (e.g.
1279 <literal>max_pages_per_rpc=4M</literal>), so the RPC size can be
1280 set independently of the client <literal>PAGE_SIZE</literal>.
1284 <para><literal>osc.<replaceable>osc_instance</replaceable>.max_rpcs_in_flight</literal>
1285 - The maximum number of concurrent RPCs in flight from an OSC to
1286 its OST. If the OSC tries to initiate an RPC but finds that it
1287 already has the same number of RPCs outstanding, it will wait to
1288 issue further RPCs until some complete. The minimum setting is 1
1289 and maximum setting is 256. The default value is 8 RPCs.
1291 <para>To improve small file I/O performance, increase the
1292 <literal>max_rpcs_in_flight</literal> value.
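<para>For example, a command similar to the following could be used (the
value shown is illustrative only):</para>
<screen>client$ lctl set_param osc.*.max_rpcs_in_flight=32</screen>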
1296 <para><literal>llite.<replaceable>fsname_instance</replaceable>.max_cache_mb</literal>
1297 - Maximum amount of read+write data cached by the client. The
1298 default value is 3/4 of the client RAM.
1304 <para>The values for <literal><replaceable>osc_instance</replaceable></literal>
1305 and <literal><replaceable>fsname_instance</replaceable></literal>
1306 are unique to each mount point to allow associating osc, mdc, lov,
1307 lmv, and llite parameters with the same mount point. However, it is
1308 common for scripts to use a wildcard <literal>*</literal> or a
1309 filesystem-specific wildcard
1310 <literal><replaceable>fsname-*</replaceable></literal> to specify
1311 the parameter settings uniformly on all clients. For example:
1313 client$ lctl get_param osc.testfs-OST0000*.rpc_stats
1314 osc.testfs-OST0000-osc-ffff88107412f400.rpc_stats=
1315 snapshot_time: 1375743284.337839 (secs.usecs)
1316 read RPCs in flight: 0
1317 write RPCs in flight: 0
1321 <section remap="h3" xml:id="TuningClientReadahead">
1323 <primary>proc</primary>
1324 <secondary>readahead</secondary>
1325 </indexterm>Tuning File Readahead and Directory Statahead</title>
1326 <para>File readahead and directory statahead enable reading of data
1327 into memory before a process requests the data. File readahead prefetches
1328 file content data into memory for <literal>read()</literal> related
1329 calls, while directory statahead fetches file metadata into memory for
1330 <literal>readdir()</literal> and <literal>stat()</literal> related
1331 calls. When readahead and statahead work well, a process that accesses
1332 data finds that the information it needs is available immediately in
1333 memory on the client when requested without the delay of network I/O.
1335 <section remap="h4">
1336 <title>Tuning File Readahead</title>
1337 <para>File readahead is triggered when two or more sequential reads
1338 by an application fail to be satisfied by data in the Linux buffer
1339 cache. The size of the initial readahead is determined by the RPC
1340 size and the file stripe size, but will typically be at least 1 MiB.
1341 Additional readaheads grow linearly and increment until the per-file
1342 or per-system readahead cache limit on the client is reached.</para>
1343 <para>Readahead tunables include:</para>
1346 <para><literal>llite.<replaceable>fsname_instance</replaceable>.max_read_ahead_mb</literal>
1347 - Controls the maximum amount of data readahead on a file.
1348 Files are read ahead in RPC-sized chunks (4 MiB, or the size of
1349 the <literal>read()</literal> call, if larger) after the second
1350 sequential read on a file descriptor. Random reads are done at
1351 the size of the <literal>read()</literal> call only (no
1352 readahead). Reads to non-contiguous regions of the file reset
1353 the readahead algorithm, and readahead is not triggered until
1354 sequential reads take place again.
1357 This is the global limit for all files and cannot be larger than
1358 1/2 of the client RAM. To disable readahead, set
1359 <literal>max_read_ahead_mb=0</literal>.
1363 <para><literal>llite.<replaceable>fsname_instance</replaceable>.max_read_ahead_per_file_mb</literal>
1364 - Controls the maximum number of megabytes (MiB) of data that
1365 should be prefetched by the client when sequential reads are
1366 detected on a file. This is the per-file readahead limit and
1367 cannot be larger than <literal>max_read_ahead_mb</literal>.
1371 <para><literal>llite.<replaceable>fsname_instance</replaceable>.max_read_ahead_whole_mb</literal>
1372 - Controls the maximum size of a file in MiB that is read in its
1373 entirety upon access, regardless of the size of the
1374 <literal>read()</literal> call. This avoids multiple small read
1375 RPCs on relatively small files, when it is not possible to
1376 efficiently detect a sequential read pattern before the whole
1379 <para>The default value is the greater of 2 MiB or the size of one
1380 RPC, as given by <literal>max_pages_per_rpc</literal>.
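<para>As a sketch, the readahead limits described above could be adjusted on
a client as follows (the values are illustrative only and should be
validated against the workload):</para>
<screen>client$ lctl set_param llite.*.max_read_ahead_mb=256
client$ lctl set_param llite.*.max_read_ahead_per_file_mb=64</screen>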
1386 <title>Tuning Directory Statahead and AGL</title>
1387 <para>Many system commands, such as <literal>ls -l</literal>,
1388 <literal>du</literal>, and <literal>find</literal>, traverse a
1389 directory sequentially. To make these commands run efficiently, the
1390 directory statahead can be enabled to improve the performance of
1391 directory traversal.</para>
1392 <para>The statahead tunables are:</para>
1395 <para><literal>statahead_max</literal> -
1396 Controls the maximum number of file attributes that will be
1397 prefetched by the statahead thread. By default, statahead is
1398 enabled and <literal>statahead_max</literal> is 32 files.</para>
1399 <para>To disable statahead, set <literal>statahead_max</literal>
1400 to zero via the following command on the client:</para>
1401 <screen>lctl set_param llite.*.statahead_max=0</screen>
1402 <para>To change the maximum statahead window size on a client:</para>
1403 <screen>lctl set_param llite.*.statahead_max=<replaceable>n</replaceable></screen>
1404 <para>The maximum <literal>statahead_max</literal> is 8192 files.
1406 <para>The directory statahead thread will also prefetch the file
1407 size/block attributes from the OSTs, so that all file attributes
1408 are available on the client when requested by an application.
1409 This is controlled by the asynchronous glimpse lock (AGL) setting.
1410 The AGL behaviour can be disabled by setting:</para>
1411 <screen>lctl set_param llite.*.statahead_agl=0</screen>
1414 <para><literal>statahead_stats</literal> -
1415 A read-only interface that provides current statahead and AGL
1416 statistics, such as how many times statahead/AGL has been triggered
1417 since the last mount and how many statahead/AGL failures have occurred
1418 due to an incorrect prediction or other causes.</para>
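<para>For example, the current counters can be displayed with:</para>
<screen>client$ lctl get_param llite.*.statahead_stats</screen>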
1420 <para>AGL behaviour is affected by statahead since the inodes
1421 processed by AGL are built by the statahead thread. If
1422 statahead is disabled, then AGL is also disabled.</para>
1428 <section remap="h3">
1430 <primary>proc</primary>
1431 <secondary>read cache</secondary>
1432 </indexterm>Tuning OSS Read Cache</title>
1433 <para>The OSS read cache feature provides read-only caching of data on an OSS. This
1434 functionality uses the Linux page cache to store the data and uses as much physical memory
1435 as is allocated.</para>
1436 <para>OSS read cache improves Lustre file system performance in these situations:</para>
1439 <para>Many clients are accessing the same data set (as in HPC applications or when
1440 diskless clients boot from the Lustre file system).</para>
1443 <para>One client is storing data while another client is reading it (i.e., clients are
1444 exchanging data via the OST).</para>
1447 <para>A client has very limited caching of its own.</para>
1450 <para>OSS read cache offers these benefits:</para>
1453 <para>Allows OSTs to cache read data more frequently.</para>
1456 <para>Improves repeated reads to match network speeds instead of disk speeds.</para>
1459 <para>Provides the building blocks for OST write cache (small-write aggregation).</para>
1462 <section remap="h4">
1463 <title>Using OSS Read Cache</title>
1464 <para>OSS read cache is implemented on the OSS, and does not require any special support on
1465 the client side. Since OSS read cache uses the memory available in the Linux page cache,
1466 the appropriate amount of memory for the cache should be determined based on I/O patterns;
1467 if the data is mostly reads, then more cache is required than would be needed for mostly
1469 <para>OSS read cache is managed using the following tunables:</para>
1472 <para><literal>read_cache_enable</literal> - Controls whether data read from disk during
1473 a read request is kept in memory and available for later read requests for the same
1474 data, without having to re-read it from disk. By default, read cache is enabled
1475 (<literal>read_cache_enable=1</literal>).</para>
1476 <para>When the OSS receives a read request from a client, it reads data from disk into
1477 its memory and sends the data as a reply to the request. If read cache is enabled,
1478 this data stays in memory after the request from the client has been fulfilled. When
1479 subsequent read requests for the same data are received, the OSS skips reading data
1480 from disk and the request is fulfilled from the cached data. The read cache is managed
1481 by the Linux kernel globally across all OSTs on that OSS so that the least recently
1482 used cache pages are dropped from memory when the amount of free memory is running
1484 <para>If read cache is disabled (<literal>read_cache_enable=0</literal>), the OSS
1485 discards the data after a read request from the client is serviced and, for subsequent
1486 read requests, the OSS again reads the data from disk.</para>
1487 <para>To disable read cache on all the OSTs of an OSS, run:</para>
1488 <screen>root@oss1# lctl set_param obdfilter.*.read_cache_enable=0</screen>
1489 <para>To re-enable read cache on one OST, run:</para>
1490 <screen>root@oss1# lctl set_param obdfilter.{OST_name}.read_cache_enable=1</screen>
1491 <para>To check if read cache is enabled on all OSTs on an OSS, run:</para>
1492 <screen>root@oss1# lctl get_param obdfilter.*.read_cache_enable</screen>
1495 <para><literal>writethrough_cache_enable</literal> - Controls whether data sent to the
1496 OSS as a write request is kept in the read cache and available for later reads, or if
1497 it is discarded from cache when the write is completed. By default, the writethrough
1498 cache is enabled (<literal>writethrough_cache_enable=1</literal>).</para>
1499 <para>When the OSS receives write requests from a client, it receives data from the
1500 client into its memory and writes the data to disk. If the writethrough cache is
1501 enabled, this data stays in memory after the write request is completed, allowing the
1502 OSS to skip reading this data from disk if a later read request, or partial-page write
1503 request, for the same data is received.</para>
1504 <para>If the writethrough cache is disabled
1505 (<literal>writethrough_cache_enable=0</literal>), the OSS discards the data after
1506 the write request from the client is completed. For subsequent read requests, or
1507 partial-page write requests, the OSS must re-read the data from disk.</para>
1508 <para>Enabling writethrough cache is advisable if clients are doing small or unaligned
1509 writes that would cause partial-page updates, or if the files written by one node are
1510 immediately being accessed by other nodes. Some examples where enabling writethrough
1511 cache might be useful include producer-consumer I/O models or shared-file writes with
1512 a different node doing I/O not aligned on 4096-byte boundaries. </para>
1513 <para>Disabling the writethrough cache is advisable when files are mostly written to the
1514 file system but are not re-read within a short time period, or files are only written
1515 and re-read by the same node, regardless of whether the I/O is aligned or not.</para>
1516 <para>To disable the writethrough cache on all OSTs of an OSS, run:</para>
1517 <screen>root@oss1# lctl set_param obdfilter.*.writethrough_cache_enable=0</screen>
1518 <para>To re-enable the writethrough cache on one OST, run:</para>
1519 <screen>root@oss1# lctl set_param obdfilter.{OST_name}.writethrough_cache_enable=1</screen>
1520 <para>To check if the writethrough cache is enabled, run:</para>
1521 <screen>root@oss1# lctl get_param obdfilter.*.writethrough_cache_enable</screen>
1524 <para><literal>readcache_max_filesize</literal> - Controls the maximum size of a file
1525 that both the read cache and writethrough cache will try to keep in memory. Files
1526 larger than <literal>readcache_max_filesize</literal> will not be kept in cache for
1527 either reads or writes.</para>
1528 <para>Setting this tunable can be useful for workloads where relatively small files are
1529 repeatedly accessed by many clients, such as job startup files, executables, log
1530 files, etc., but large files are read or written only once. By not putting the larger
1531 files into the cache, it is much more likely that more of the smaller files will
1532 remain in cache for a longer time.</para>
1533 <para>When setting <literal>readcache_max_filesize</literal>, the input value can be
1534 specified in bytes, or can have a suffix to indicate other binary units such as
1535 <literal>K</literal> (kilobytes), <literal>M</literal> (megabytes),
1536 <literal>G</literal> (gigabytes), <literal>T</literal> (terabytes), or
1537 <literal>P</literal> (petabytes).</para>
1538 <para>To limit the maximum cached file size to 32 MB on all OSTs of an OSS, run:</para>
1539 <screen>root@oss1# lctl set_param obdfilter.*.readcache_max_filesize=32M</screen>
1540 <para>To disable the maximum cached file size on an OST, run:</para>
1541 <screen>root@oss1# lctl set_param obdfilter.{OST_name}.readcache_max_filesize=-1</screen>
1542 <para>To check the current maximum cached file size on all OSTs of an OSS, run:</para>
1543 <screen>root@oss1# lctl get_param obdfilter.*.readcache_max_filesize</screen>
1550 <primary>proc</primary>
1551 <secondary>OSS journal</secondary>
1552 </indexterm>Enabling OSS Asynchronous Journal Commit</title>
1553 <para>The OSS asynchronous journal commit feature asynchronously writes data to disk without
1554 forcing a journal flush. This reduces the number of seeks and significantly improves
1555 performance on some hardware.</para>
1557 <para>Asynchronous journal commit cannot work with direct I/O-originated writes
1558 (<literal>O_DIRECT</literal> flag set). In this case, a journal flush is forced. </para>
1560 <para>When the asynchronous journal commit feature is enabled, client nodes keep data in the
1561 page cache (a page reference). Lustre clients monitor the last committed transaction number
1562 (<literal>transno</literal>) in messages sent from the OSS to the clients. When a client
1563 sees that the last committed <literal>transno</literal> reported by the OSS is at least
1564 equal to the bulk write <literal>transno</literal>, it releases the reference on the
1565 corresponding pages. To avoid page references being held for too long on clients after a
1566 bulk write, a 7 second ping request is scheduled (the default OSS file system commit time
1567 interval is 5 seconds) after the bulk write reply is received, so the OSS has an opportunity
1568 to report the last committed <literal>transno</literal>.</para>
1569 <para>If the OSS crashes before the journal commit occurs, then intermediate data is lost.
1570 However, OSS recovery functionality incorporated into the asynchronous journal commit
1571 feature causes clients to replay their write requests and compensate for the missing disk
1572 updates by restoring the state of the file system.</para>
1573 <para>By default, <literal>sync_journal</literal> is enabled
1574 (<literal>sync_journal=1</literal>), so that journal entries are committed synchronously.
1575 To enable asynchronous journal commit, set the <literal>sync_journal</literal> parameter to
1576 <literal>0</literal> by entering: </para>
1577 <screen>$ lctl set_param obdfilter.*.sync_journal=0
1578 obdfilter.lol-OST0001.sync_journal=0</screen>
1579 <para>An associated <literal>sync-on-lock-cancel</literal> feature (enabled by default)
1580 addresses a data consistency issue that can result if an OSS crashes after multiple clients
1581 have written data into intersecting regions of an object, and then one of the clients also
1582 crashes. A condition is created in which the POSIX requirement for continuous writes is
1583 violated along with a potential for corrupted data. With
1584 <literal>sync-on-lock-cancel</literal> enabled, if a cancelled lock has any volatile
1585 writes attached to it, the OSS synchronously writes the journal to disk on lock
1586 cancellation. Disabling the <literal>sync-on-lock-cancel</literal> feature may enhance
1587 performance for concurrent write workloads, but it is recommended that you not disable this
1589 <para> The <literal>sync_on_lock_cancel</literal> parameter can be set to the following
1593 <para><literal>always</literal> - Always force a journal flush on lock cancellation
1594 (default when <literal>async_journal</literal> is enabled).</para>
1597 <para><literal>blocking</literal> - Force a journal flush only when the local cancellation
1598 is due to a blocking callback.</para>
1601 <para><literal>never</literal> - Do not force any journal flush (default when
1602 <literal>async_journal</literal> is disabled).</para>
1605 <para>For example, to set <literal>sync_on_lock_cancel</literal> so that it does not force a journal
1606 flush, use a command similar to:</para>
1607 <screen>$ lctl set_param obdfilter.*.sync_on_lock_cancel=never
1608 obdfilter.lol-OST0001.sync_on_lock_cancel=never</screen>
1610 <section xml:id="dbdoclet.TuningModRPCs" condition='l28'>
1613 <primary>proc</primary>
1614 <secondary>client metadata performance</secondary>
1616 Tuning the Client Metadata RPC Stream
1618 <para>The client metadata RPC stream represents the metadata RPCs issued
1619 in parallel by a client to an MDT target. The metadata RPCs can be split
1620 into two categories: the requests that do not modify the file system
1621 (like the getattr operation), and the requests that do modify the file system
1622 (like create, unlink, setattr operations). To help optimize the client
1623 metadata RPC stream, several tuning variables are provided to adjust
1624 behavior according to network conditions and cluster size.</para>
<para>Note that increasing the number of metadata RPCs issued in parallel
might improve the performance of metadata-intensive parallel applications,
but as a consequence it will consume more memory on the client and on the
MDS.</para>
1630 <title>Configuring the Client Metadata RPC Stream</title>
1631 <para>The MDC <literal>max_rpcs_in_flight</literal> parameter defines
1632 the maximum number of metadata RPCs, both modifying and
1633 non-modifying RPCs, that can be sent in parallel by a client to a MDT
target. This includes every file system metadata operation, such as
file or directory stat, creation, and unlink. The default setting is 8, the
minimum setting is 1, and the maximum setting is 256.</para>
1637 <para>To set the <literal>max_rpcs_in_flight</literal> parameter, run
1638 the following command on the Lustre client:</para>
1639 <screen>client$ lctl set_param mdc.*.max_rpcs_in_flight=16</screen>
1640 <para>The MDC <literal>max_mod_rpcs_in_flight</literal> parameter
1641 defines the maximum number of file system modifying RPCs that can be
sent in parallel by a client to an MDT target. For example, the Lustre
1643 client sends modify RPCs when it performs file or directory creation,
1644 unlink, access permission modification or ownership modification. The
default setting is 7 and the minimum setting is 1; the maximum setting is
bounded by the <literal>max_rpcs_in_flight</literal> value, as described
below.</para>
1647 <para>To set the <literal>max_mod_rpcs_in_flight</literal> parameter,
1648 run the following command on the Lustre client:</para>
1649 <screen>client$ lctl set_param mdc.*.max_mod_rpcs_in_flight=12</screen>
1650 <para>The <literal>max_mod_rpcs_in_flight</literal> value must be
1651 strictly less than the <literal>max_rpcs_in_flight</literal> value.
It must also be less than or equal to the MDT
<literal>max_mod_rpcs_per_client</literal> value. If either of these
conditions is not met, the setting fails and an explicit message
is written to the Lustre log.</para>
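<para>Before tuning, the current values of both limits can be checked on the client.
This is a quick sketch; the device instance name and values in the output are
illustrative only:</para>
<screen>client$ lctl get_param mdc.*.max_rpcs_in_flight mdc.*.max_mod_rpcs_in_flight
mdc.testfs-MDT0000-mdc-ffff88106c11ee00.max_rpcs_in_flight=8
mdc.testfs-MDT0000-mdc-ffff88106c11ee00.max_mod_rpcs_in_flight=7</screen>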
1656 <para>The MDT <literal>max_mod_rpcs_per_client</literal> parameter is a
1657 tunable of the kernel module <literal>mdt</literal> that defines the
1658 maximum number of file system modifying RPCs in flight allowed per
client. The parameter can be updated at runtime, but the change is
effective only for new client connections. The default setting is 8.
1662 <para>To set the <literal>max_mod_rpcs_per_client</literal> parameter,
1663 run the following command on the MDS:</para>
1664 <screen>mds$ echo 12 > /sys/module/mdt/parameters/max_mod_rpcs_per_client</screen>
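<para>To check the value currently in effect on the MDS, read the module parameter
back (the value shown corresponds to the example above):</para>
<screen>mds$ cat /sys/module/mdt/parameters/max_mod_rpcs_per_client
12</screen>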
1667 <title>Monitoring the Client Metadata RPC Stream</title>
1668 <para>The <literal>rpc_stats</literal> file contains histogram data
showing information about modify metadata RPCs. It can help
identify the level of parallelism achieved by an application performing
1671 modify metadata operations.</para>
1672 <para><emphasis role="bold">Example:</emphasis></para>
1673 <screen>client$ lctl get_param mdc.*.rpc_stats
1674 snapshot_time: 1441876896.567070 (secs.usecs)
1675 modify_RPCs_in_flight: 0
1678 rpcs in flight rpcs % cum %
1691 12: 4540 18 100</screen>
1692 <para>The file information includes:</para>
1695 <para><literal>snapshot_time</literal> - UNIX epoch instant the
1696 file was read.</para>
1699 <para><literal>modify_RPCs_in_flight</literal> - Number of modify
1700 RPCs issued by the MDC, but not completed at the time of the
1701 snapshot. This value should always be less than or equal to
1702 <literal>max_mod_rpcs_in_flight</literal>.</para>
1705 <para><literal>rpcs in flight</literal> - Number of modify RPCs
that are pending when an RPC is sent, the relative percentage
1707 (<literal>%</literal>) of total modify RPCs, and the cumulative
1708 percentage (<literal>cum %</literal>) to that point.</para>
1711 <para>If a large proportion of modify metadata RPCs are issued with a
1712 number of pending metadata RPCs close to the
1713 <literal>max_mod_rpcs_in_flight</literal> value, it means the
1714 <literal>max_mod_rpcs_in_flight</literal> value could be increased to
1715 improve the modify metadata performance.</para>
1720 <title>Configuring Timeouts in a Lustre File System</title>
1721 <para>In a Lustre file system, RPC timeouts are set using an adaptive timeouts mechanism, which
1722 is enabled by default. Servers track RPC completion times and then report back to clients
1723 estimates for completion times for future RPCs. Clients use these estimates to set RPC
1724 timeout values. If the processing of server requests slows down for any reason, the server
1725 estimates for RPC completion increase, and clients then revise RPC timeout values to allow
1726 more time for RPC completion.</para>
1727 <para>If the RPCs queued on the server approach the RPC timeout specified by the client, to
1728 avoid RPC timeouts and disconnect/reconnect cycles, the server sends an "early reply" to the
1729 client, telling the client to allow more time. Conversely, as server processing speeds up, RPC
1730 timeout values decrease, resulting in faster detection if the server becomes non-responsive
1731 and quicker connection to the failover partner of the server.</para>
1734 <primary>proc</primary>
1735 <secondary>configuring adaptive timeouts</secondary>
1736 </indexterm><indexterm>
1737 <primary>configuring</primary>
1738 <secondary>adaptive timeouts</secondary>
1739 </indexterm><indexterm>
1740 <primary>proc</primary>
1741 <secondary>adaptive timeouts</secondary>
1742 </indexterm>Configuring Adaptive Timeouts</title>
1743 <para>The adaptive timeout parameters in the table below can be set persistently system-wide
1744 using <literal>lctl conf_param</literal> on the MGS. For example, the following command sets
1745 the <literal>at_max</literal> value for all servers and clients associated with the file
1747 <literal>testfs</literal>:<screen>lctl conf_param testfs.sys.at_max=1500</screen></para>
1749 <para>Clients that access multiple Lustre file systems must use the same parameter values
1750 for all file systems.</para>
1752 <informaltable frame="all">
1754 <colspec colname="c1" colwidth="30*"/>
1755 <colspec colname="c2" colwidth="80*"/>
1759 <para><emphasis role="bold">Parameter</emphasis></para>
1762 <para><emphasis role="bold">Description</emphasis></para>
1770 <literal> at_min </literal></para>
1773 <para>Minimum adaptive timeout (in seconds). The default value is 0. The
1774 <literal>at_min</literal> parameter is the minimum processing time that a server
1775 will report. Ideally, <literal>at_min</literal> should be set to its default
value. Clients base their timeouts on this value, but they do not use this value
directly.</para>
1778 <para>If, for unknown reasons (usually due to temporary network outages), the
1779 adaptive timeout value is too short and clients time out their RPCs, you can
1780 increase the <literal>at_min</literal> value to compensate for this.</para>
1786 <literal> at_max </literal></para>
1789 <para>Maximum adaptive timeout (in seconds). The <literal>at_max</literal> parameter
1790 is an upper-limit on the service time estimate. If <literal>at_max</literal> is
1791 reached, an RPC request times out.</para>
1792 <para>Setting <literal>at_max</literal> to 0 causes adaptive timeouts to be disabled
1793 and a fixed timeout method to be used instead (see <xref
xmlns:xlink="http://www.w3.org/1999/xlink" linkend="section_c24_nt5_dl"/>).</para>
1796 <para>If slow hardware causes the service estimate to increase beyond the default
1797 value of <literal>at_max</literal>, increase <literal>at_max</literal> to the
1798 maximum time you are willing to wait for an RPC completion.</para>
1805 <literal> at_history </literal></para>
1808 <para>Time period (in seconds) within which adaptive timeouts remember the slowest
1809 event that occurred. The default is 600.</para>
1815 <literal> at_early_margin </literal></para>
1818 <para>Amount of time before the Lustre server sends an early reply (in seconds).
1819 Default is 5.</para>
1825 <literal> at_extra </literal></para>
1828 <para>Incremental amount of time that a server requests with each early reply (in
1829 seconds). The server does not know how much time the RPC will take, so it asks for
1830 a fixed value. The default is 30, which provides a balance between sending too
many early replies for the same RPC and overestimating the actual completion
time.</para>
1833 <para>When a server finds a queued request about to time out and needs to send an
1834 early reply out, the server adds the <literal>at_extra</literal> value. If the
1835 time expires, the Lustre server drops the request, and the client enters recovery
1836 status and reconnects to restore the connection to normal status.</para>
1837 <para>If you see multiple early replies for the same RPC asking for 30-second
1838 increases, change the <literal>at_extra</literal> value to a larger number to cut
1839 down on early replies sent and, therefore, network load.</para>
1845 <literal> ldlm_enqueue_min </literal></para>
1848 <para>Minimum lock enqueue time (in seconds). The default is 100. The time it takes
to enqueue a lock, <literal>ldlm_enqueue</literal>, is the maximum of the measured
enqueue estimate (influenced by the <literal>at_min</literal> and
<literal>at_max</literal> parameters) multiplied by a weighting factor, and the
value of <literal>ldlm_enqueue_min</literal>.</para>
1853 <para>Lustre Distributed Lock Manager (LDLM) lock enqueues have a dedicated minimum
1854 value for <literal>ldlm_enqueue_min</literal>. Lock enqueue timeouts increase as
1855 the measured enqueue times increase (similar to adaptive timeouts).</para>
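<para>For example, to raise <literal>at_min</literal> persistently for all servers and
clients of a file system, following the same <literal>conf_param</literal> form used
for <literal>at_max</literal> above (the file system name <literal>testfs</literal>
and the value are illustrative only), run on the MGS:</para>
<screen># lctl conf_param testfs.sys.at_min=40</screen>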
1862 <title>Interpreting Adaptive Timeout Information</title>
1863 <para>Adaptive timeout information can be obtained via
1864 <literal>lctl get_param {osc,mdc}.*.timeouts</literal> files on each
1865 client and <literal>lctl get_param {ost,mds}.*.*.timeouts</literal>
1866 on each server. To read information from a
1867 <literal>timeouts</literal> file, enter a command similar to:</para>
1868 <screen># lctl get_param -n ost.*.ost_io.timeouts
1869 service : cur 33 worst 34 (at 1193427052, 1600s ago) 1 1 33 2</screen>
1870 <para>In this example, the <literal>ost_io</literal> service on this
1871 node is currently reporting an estimated RPC service time of 33
1872 seconds. The worst RPC service time was 34 seconds, which occurred
1873 26 minutes ago.</para>
1874 <para>The output also provides a history of service times.
1875 Four "bins" of adaptive timeout history are shown, with the
1876 maximum RPC time in each bin reported. In both the 0-150s bin and the
1877 150-300s bin, the maximum RPC time was 1. The 300-450s bin shows the
1878 worst (maximum) RPC time at 33 seconds, and the 450-600s bin shows a
maximum RPC time of 2 seconds. The estimated service time is the
1880 maximum value in the four bins (33 seconds in this example).</para>
1881 <para>Service times (as reported by the servers) are also tracked in
1882 the client OBDs, as shown in this example:</para>
1883 <screen># lctl get_param osc.*.timeouts
1884 last reply : 1193428639, 0d0h00m00s ago
1885 network : cur 1 worst 2 (at 1193427053, 0d0h26m26s ago) 1 1 1 1
1886 portal 6 : cur 33 worst 34 (at 1193427052, 0d0h26m27s ago) 33 33 33 2
1887 portal 28 : cur 1 worst 1 (at 1193426141, 0d0h41m38s ago) 1 1 1 1
1888 portal 7 : cur 1 worst 1 (at 1193426141, 0d0h41m38s ago) 1 0 1 1
1889 portal 17 : cur 1 worst 1 (at 1193426177, 0d0h41m02s ago) 1 0 0 1
1891 <para>In this example, portal 6, the <literal>ost_io</literal> service
1892 portal, shows the history of service estimates reported by the portal.
1894 <para>Server statistic files also show the range of estimates including
1895 min, max, sum, and sum-squared. For example:</para>
1896 <screen># lctl get_param mdt.*.mdt.stats
1898 req_timeout 6 samples [sec] 1 10 15 105
1903 <section xml:id="section_c24_nt5_dl">
1904 <title>Setting Static Timeouts<indexterm>
1905 <primary>proc</primary>
1906 <secondary>static timeouts</secondary>
1907 </indexterm></title>
1908 <para>The Lustre software provides two sets of static (fixed) timeouts, LND timeouts and
1909 Lustre timeouts, which are used when adaptive timeouts are not enabled.</para>
1913 <para><emphasis role="italic"><emphasis role="bold">LND timeouts</emphasis></emphasis> -
1914 LND timeouts ensure that point-to-point communications across a network complete in a
finite time in the presence of failures, such as lost packets or broken connections.
1916 LND timeout parameters are set for each individual LND.</para>
1917 <para>LND timeouts are logged with the <literal>S_LND</literal> flag set. They are not
1918 printed as console messages, so check the Lustre log for <literal>D_NETERROR</literal>
1919 messages or enable printing of <literal>D_NETERROR</literal> messages to the console
1920 using:<screen>lctl set_param printk=+neterror</screen></para>
1921 <para>Congested routers can be a source of spurious LND timeouts. To avoid this
1922 situation, increase the number of LNet router buffers to reduce back-pressure and/or
1923 increase LND timeouts on all nodes on all connected networks. Also consider increasing
1924 the total number of LNet router nodes in the system so that the aggregate router
1925 bandwidth matches the aggregate server bandwidth.</para>
1928 <para><emphasis role="italic"><emphasis role="bold">Lustre timeouts
1929 </emphasis></emphasis>- Lustre timeouts ensure that Lustre RPCs complete in a finite
1930 time in the presence of failures when adaptive timeouts are not enabled. Adaptive
1931 timeouts are enabled by default. To disable adaptive timeouts at run time, set
1932 <literal>at_max</literal> to 0 by running on the
1933 MGS:<screen># lctl conf_param <replaceable>fsname</replaceable>.sys.at_max=0</screen></para>
1935 <para>Changing the status of adaptive timeouts at runtime may cause a transient client
1936 timeout, recovery, and reconnection.</para>
1938 <para>Lustre timeouts are always printed as console messages. </para>
1939 <para>If Lustre timeouts are not accompanied by LND timeouts, increase the Lustre
1940 timeout on both servers and clients. Lustre timeouts are set using a command such as
1941 the following:<screen># lctl set_param timeout=30</screen></para>
1942 <para>Lustre timeout parameters are described in the table below.</para>
1945 <informaltable frame="all">
1947 <colspec colname="c1" colnum="1" colwidth="30*"/>
1948 <colspec colname="c2" colnum="2" colwidth="70*"/>
1951 <entry>Parameter</entry>
1952 <entry>Description</entry>
1957 <entry><literal>timeout</literal></entry>
1959 <para>The time that a client waits for a server to complete an RPC (default 100s).
1960 Servers wait half this time for a normal client RPC to complete and a quarter of
1961 this time for a single bulk request (read or write of up to 4 MB) to complete.
1962 The client pings recoverable targets (MDS and OSTs) at one quarter of the
1963 timeout, and the server waits one and a half times the timeout before evicting a
1964 client for being "stale."</para>
<para>The Lustre client sends periodic 'ping' messages to servers with which
it has had no communication for the specified period of time. Any network
activity between a client and a server in the file system also serves as a
ping.</para>
1972 <entry><literal>ldlm_timeout</literal></entry>
1974 <para>The time that a server waits for a client to reply to an initial AST (lock
1975 cancellation request). The default is 20s for an OST and 6s for an MDS. If the
1976 client replies to the AST, the server will give it a normal timeout (half the
1977 client timeout) to flush any dirty data and release the lock.</para>
1981 <entry><literal>fail_loc</literal></entry>
1983 <para>An internal debugging failure hook. The default value of
<literal>0</literal> means that no failure will be triggered or
injected.</para>
1989 <entry><literal>dump_on_timeout</literal></entry>
1991 <para>Triggers a dump of the Lustre debug log when a timeout occurs. The default
1992 value of <literal>0</literal> (zero) means a dump of the Lustre debug log will
1993 not be triggered.</para>
1997 <entry><literal>dump_on_eviction</literal></entry>
1999 <para>Triggers a dump of the Lustre debug log when an eviction occurs. The default
2000 value of <literal>0</literal> (zero) means a dump of the Lustre debug log will
2001 not be triggered. </para>
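<para>For example, to capture a debug log automatically the next time an RPC times out
on a node, enable <literal>dump_on_timeout</literal> on that node (a simple sketch;
set the parameter back to <literal>0</literal> to disable it again):</para>
<screen># lctl set_param dump_on_timeout=1</screen>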
2010 <section remap="h3">
2012 <primary>proc</primary>
2013 <secondary>LNet</secondary>
2014 </indexterm><indexterm>
2015 <primary>LNet</primary>
2016 <secondary>proc</secondary>
2017 </indexterm>Monitoring LNet</title>
<para>LNet information is available via <literal>lctl get_param</literal>
in these parameters:
2022 <para><literal>peers</literal> - Shows all NIDs known to this node
2023 and provides information on the queue state.</para>
2024 <para>Example:</para>
2025 <screen># lctl get_param peers
2026 nid refs state max rtr min tx min queue
2027 0@lo 1 ~rtr 0 0 0 0 0 0
2028 192.168.10.35@tcp 1 ~rtr 8 8 8 8 6 0
2029 192.168.10.36@tcp 1 ~rtr 8 8 8 8 6 0
2030 192.168.10.37@tcp 1 ~rtr 8 8 8 8 6 0</screen>
2031 <para>The fields are explained in the table below:</para>
2032 <informaltable frame="all">
2034 <colspec colname="c1" colwidth="30*"/>
2035 <colspec colname="c2" colwidth="80*"/>
2039 <para><emphasis role="bold">Field</emphasis></para>
2042 <para><emphasis role="bold">Description</emphasis></para>
2050 <literal>refs</literal>
2054 <para>A reference count. </para>
2060 <literal>state</literal>
<para>If the node is a router, indicates the state of the router. Possible
values are:</para>
2068 <para><literal>NA</literal> - Indicates the node is not a router.</para>
<para><literal>up/down</literal> - Indicates if the node (router) is up or
down.</para>
2080 <literal>max </literal></para>
2083 <para>Maximum number of concurrent sends from this peer.</para>
2089 <literal>rtr </literal></para>
2092 <para>Number of routing buffer credits.</para>
2098 <literal>min </literal></para>
2101 <para>Minimum number of routing buffer credits seen.</para>
2107 <literal>tx </literal></para>
2110 <para>Number of send credits.</para>
2116 <literal>min </literal></para>
2119 <para>Minimum number of send credits seen.</para>
2125 <literal>queue </literal></para>
2128 <para>Total bytes in active/queued sends.</para>
<para>Credits are initialized to allow a certain number of operations (eight in the
example above, as shown in the <literal>max</literal> column). LNet keeps track
of the minimum number of credits ever seen over time, showing the peak congestion that
has occurred during the time monitored. Fewer available credits indicate a more
congested resource.</para>
2139 <para>The number of credits currently in flight (number of transmit credits) is shown in
2140 the <literal>tx</literal> column. The maximum number of send credits available is shown
2141 in the <literal>max</literal> column and never changes. The number of router buffers
available for consumption by a peer is shown in the <literal>rtr</literal>
column.</para>
2144 <para>Therefore, <literal>rtr</literal> – <literal>tx</literal> is the number of transmits
2145 in flight. Typically, <literal>rtr == max</literal>, although a configuration can be set
2146 such that <literal>max >= rtr</literal>. The ratio of routing buffer credits to send
2147 credits (<literal>rtr/tx</literal>) that is less than <literal>max</literal> indicates
2148 operations are in progress. If the ratio <literal>rtr/tx</literal> is greater than
2149 <literal>max</literal>, operations are blocking.</para>
2150 <para>LNet also limits concurrent sends and number of router buffers allocated to a single
2151 peer so that no peer can occupy all these resources.</para>
2154 <para><literal>nis</literal> - Shows the current queue health on this node.</para>
2155 <para>Example:</para>
2156 <screen># lctl get_param nis
2157 nid refs peer max tx min
2159 192.168.10.34@tcp 4 8 256 256 252
2161 <para> The fields are explained in the table below.</para>
2162 <informaltable frame="all">
2164 <colspec colname="c1" colwidth="30*"/>
2165 <colspec colname="c2" colwidth="80*"/>
2169 <para><emphasis role="bold">Field</emphasis></para>
2172 <para><emphasis role="bold">Description</emphasis></para>
2180 <literal> nid </literal></para>
2183 <para>Network interface.</para>
2189 <literal> refs </literal></para>
2192 <para>Internal reference counter.</para>
2198 <literal> peer </literal></para>
2201 <para>Number of peer-to-peer send credits on this NID. Credits are used to size
2202 buffer pools.</para>
2208 <literal> max </literal></para>
2211 <para>Total number of send credits on this NID.</para>
2217 <literal> tx </literal></para>
2220 <para>Current number of send credits available on this NID.</para>
2226 <literal> min </literal></para>
2229 <para>Lowest number of send credits available on this NID.</para>
2235 <literal> queue </literal></para>
2238 <para>Total bytes in active/queued sends.</para>
2244 <para><emphasis role="bold"><emphasis role="italic">Analysis:</emphasis></emphasis></para>
<para>Subtracting <literal>tx</literal> from <literal>max</literal>
(<literal>max</literal> - <literal>tx</literal>) yields the number of sends currently
2247 active. A large or increasing number of active sends may indicate a problem.</para>
2249 </itemizedlist></para>
2251 <section remap="h3" xml:id="dbdoclet.balancing_free_space">
2253 <primary>proc</primary>
2254 <secondary>free space</secondary>
2255 </indexterm>Allocating Free Space on OSTs</title>
2256 <para>Free space is allocated using either a round-robin or a weighted
2257 algorithm. The allocation method is determined by the maximum amount of
2258 free-space imbalance between the OSTs. When free space is relatively
2259 balanced across OSTs, the faster round-robin allocator is used, which
2260 maximizes network balancing. The weighted allocator is used when any two
2261 OSTs are out of balance by more than a specified threshold.</para>
2262 <para>Free space distribution can be tuned using these two
2263 tunable parameters:</para>
<para><literal>lod.*.qos_threshold_rr</literal> - The threshold at which
the allocation method switches from round-robin to weighted is set
in this file. The default is to switch to the weighted algorithm when
any two OSTs are out of balance by more than 17 percent (see the example
following this list).</para>
2272 <para><literal>lod.*.qos_prio_free</literal> - The weighting priority
2273 used by the weighted allocator can be adjusted in this file. Increasing
2274 the value of <literal>qos_prio_free</literal> puts more weighting on the
2275 amount of free space available on each OST and less on how stripes are
2276 distributed across OSTs. The default value is 91 percent weighting for
2277 free space rebalancing and 9 percent for OST balancing. When the
2278 free space priority is set to 100, weighting is based entirely on free
2279 space and location is no longer used by the striping algorithm.</para>
2282 <para condition="l29"><literal>osp.*.reserved_mb_low</literal>
2283 - The low watermark used to stop object allocation if available space
2284 is less than this. The default is 0.1% of total OST size.</para>
2287 <para condition="l29"><literal>osp.*.reserved_mb_high</literal>
2288 - The high watermark used to start object allocation if available
2289 space is more than this. The default is 0.2% of total OST size.</para>
2292 <para>For more information about monitoring and managing free space, see <xref
2293 xmlns:xlink="http://www.w3.org/1999/xlink" linkend="dbdoclet.50438209_10424"/>.</para>
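<para>For example, to make the allocator switch to the weighted algorithm at a smaller
imbalance and to weight free space more heavily, the two tunables above can be adjusted
on the MDS with <literal>lctl set_param</literal> (the values shown are illustrative
only):</para>
<screen># lctl set_param lod.*.qos_threshold_rr=10
# lctl set_param lod.*.qos_prio_free=98</screen>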
2295 <section remap="h3">
2297 <primary>proc</primary>
2298 <secondary>locking</secondary>
2299 </indexterm>Configuring Locking</title>
2300 <para>The <literal>lru_size</literal> parameter is used to control the
2301 number of client-side locks in the LRU cached locks queue. LRU size is
2302 normally dynamic, based on load to optimize the number of locks cached
2303 on nodes that have different workloads (e.g., login/build nodes vs.
2304 compute nodes vs. backup nodes).</para>
2305 <para>The total number of locks available is a function of the server RAM.
2306 The default limit is 50 locks/1 MB of RAM. If memory pressure is too high,
2307 the LRU size is shrunk. The number of locks on the server is limited to
<replaceable>num_osts_per_oss * num_clients * lru_size</replaceable>.</para>
2312 <para>To enable automatic LRU sizing, set the
2313 <literal>lru_size</literal> parameter to 0. In this case, the
2314 <literal>lru_size</literal> parameter shows the current number of locks
2315 being used on the client. Dynamic LRU resizing is enabled by default.
2319 <para>To specify a maximum number of locks, set the
2320 <literal>lru_size</literal> parameter to a value other than zero.
2321 A good default value for compute nodes is around
2322 <literal>100 * <replaceable>num_cpus</replaceable></literal>.
2323 It is recommended that you only set <literal>lru_size</literal>
to be significantly larger on a few login nodes where multiple
2325 users access the file system interactively.</para>
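<para>For example, on a compute node with 16 CPU cores, following the
<literal>100 * <replaceable>num_cpus</replaceable></literal> guideline above, a fixed
LRU size can be set as follows (the value is illustrative only):</para>
<screen># lctl set_param ldlm.namespaces.*.lru_size=1600</screen>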
2328 <para>To clear the LRU on a single client, and, as a result, flush client
2329 cache without changing the <literal>lru_size</literal> value, run:</para>
2330 <screen># lctl set_param ldlm.namespaces.<replaceable>osc_name|mdc_name</replaceable>.lru_size=clear</screen>
2331 <para>If the LRU size is set lower than the number of existing locks,
2332 <emphasis>unused</emphasis> locks are canceled immediately. Use
2333 <literal>clear</literal> to cancel all locks without changing the value.
2336 <para>The <literal>lru_size</literal> parameter can only be set
temporarily using <literal>lctl set_param</literal>; it cannot be set
permanently.</para>
2340 <para>To disable dynamic LRU resizing on the clients, run for example:
2342 <screen># lctl set_param ldlm.namespaces.*osc*.lru_size=5000</screen>
2343 <para>To determine the number of locks being granted with dynamic LRU
2344 resizing, run:</para>
2345 <screen>$ lctl get_param ldlm.namespaces.*.pool.limit</screen>
2346 <para>The <literal>lru_max_age</literal> parameter is used to control the
2347 age of client-side locks in the LRU cached locks queue. This limits how
long unused locks are cached on the client, and prevents idle clients from
2349 holding locks for an excessive time, which reduces memory usage on both
2350 the client and server, as well as reducing work during server recovery.
2352 <para>The <literal>lru_max_age</literal> is set and printed in milliseconds,
2353 and by default is 3900000 ms (65 minutes).</para>
2354 <para condition='l2B'>Since Lustre 2.11, in addition to setting the
2355 maximum lock age in milliseconds, it can also be set using a suffix of
2356 <literal>s</literal> or <literal>ms</literal> to indicate seconds or
2357 milliseconds, respectively. For example to set the client's maximum
lock age to 15 minutes (900s), run:
<screen># lctl set_param ldlm.namespaces.*MDT*.lru_max_age=900s
# lctl get_param ldlm.namespaces.*MDT*.lru_max_age
ldlm.namespaces.myth-MDT0000-mdc-ffff8804296c2800.lru_max_age=900000</screen></para>
2366 <section xml:id="dbdoclet.50438271_87260">
2368 <primary>proc</primary>
2369 <secondary>thread counts</secondary>
2370 </indexterm>Setting MDS and OSS Thread Counts</title>
<para>MDS and OSS thread count tunables can be used to set the minimum and maximum
thread counts or to get the current number of running threads for the services listed
in the table below.</para>
2374 <informaltable frame="all">
2376 <colspec colname="c1" colwidth="50*"/>
2377 <colspec colname="c2" colwidth="50*"/>
2382 <emphasis role="bold">Service</emphasis></para>
2386 <emphasis role="bold">Description</emphasis></para>
2391 <literal> mds.MDS.mdt </literal>
2394 <para>Main metadata operations service</para>
2399 <literal> mds.MDS.mdt_readpage </literal>
2402 <para>Metadata <literal>readdir</literal> service</para>
2407 <literal> mds.MDS.mdt_setattr </literal>
2410 <para>Metadata <literal>setattr/close</literal> operations service </para>
2415 <literal> ost.OSS.ost </literal>
2418 <para>Main data operations service</para>
2423 <literal> ost.OSS.ost_io </literal>
2426 <para>Bulk data I/O services</para>
2431 <literal> ost.OSS.ost_create </literal>
2434 <para>OST object pre-creation service</para>
2439 <literal> ldlm.services.ldlm_canceld </literal>
2442 <para>DLM lock cancel service</para>
2447 <literal> ldlm.services.ldlm_cbd </literal>
2450 <para>DLM lock grant service</para>
2456 <para>For each service, tunable parameters as shown below are available.
2460 <para>To temporarily set these tunables, run:</para>
2461 <screen># lctl set_param <replaceable>service</replaceable>.threads_<replaceable>min|max|started=num</replaceable> </screen>
2464 <para>To permanently set this tunable, run:</para>
2465 <screen># lctl conf_param <replaceable>obdname|fsname.obdtype</replaceable>.threads_<replaceable>min|max|started</replaceable> </screen>
2466 <para condition='l25'>For version 2.5 or later, run:
2467 <screen># lctl set_param -P <replaceable>service</replaceable>.threads_<replaceable>min|max|started</replaceable></screen></para>
2470 <para>The following examples show how to set thread counts and get the number of running threads
2471 for the service <literal>ost_io</literal> using the tunable
2472 <literal><replaceable>service</replaceable>.threads_<replaceable>min|max|started</replaceable></literal>.</para>
2475 <para>To get the number of running threads, run:</para>
2476 <screen># lctl get_param ost.OSS.ost_io.threads_started
2477 ost.OSS.ost_io.threads_started=128</screen>
<para>To check the current maximum thread count (512 in this example), run:</para>
2481 <screen># lctl get_param ost.OSS.ost_io.threads_max
2482 ost.OSS.ost_io.threads_max=512</screen>
<para>To set the maximum thread count to 256 instead of 512 (for example, to avoid
overloading the storage array with requests), run:</para>
2487 <screen># lctl set_param ost.OSS.ost_io.threads_max=256
2488 ost.OSS.ost_io.threads_max=256</screen>
2491 <para>To set the maximum thread count to 256 instead of 512 permanently, run:</para>
2492 <screen># lctl conf_param testfs.ost.ost_io.threads_max=256</screen>
2493 <para condition='l25'>For version 2.5 or later, run:
2494 <screen># lctl set_param -P ost.OSS.ost_io.threads_max=256
2495 ost.OSS.ost_io.threads_max=256 </screen> </para>
2498 <para> To check if the <literal>threads_max</literal> setting is active, run:</para>
2499 <screen># lctl get_param ost.OSS.ost_io.threads_max
2500 ost.OSS.ost_io.threads_max=256</screen>
2504 <para>If the number of service threads is changed while the file system is running, the change
may not take effect until the file system is stopped and restarted. If the number of service
2506 threads in use exceeds the new <literal>threads_max</literal> value setting, service threads
2507 that are already running will not be stopped.</para>
2509 <para>See also <xref xmlns:xlink="http://www.w3.org/1999/xlink" linkend="lustretuning"/></para>
2511 <section xml:id="dbdoclet.50438271_83523">
2513 <primary>proc</primary>
2514 <secondary>debug</secondary>
2515 </indexterm>Enabling and Interpreting Debugging Logs</title>
2516 <para>By default, a detailed log of all operations is generated to aid in
2517 debugging. Flags that control debugging are found via
2518 <literal>lctl get_param debug</literal>.</para>
<para>The overhead of debugging can affect the performance of a Lustre file
system. Therefore, to minimize the impact on performance, the debug level
can be lowered, which affects the amount of debugging information kept in
the internal log buffer but does not alter the amount of information that
goes into syslog. You can raise the debug level when you need to collect
2524 logs to debug problems. </para>
2525 <para>The debugging mask can be set using "symbolic names". The
2526 symbolic format is shown in the examples below.
2529 <para>To verify the debug level used, examine the parameter that
2530 controls debugging by running:</para>
2531 <screen># lctl get_param debug
2533 ioctl neterror warning error emerg ha config console</screen>
2536 <para>To turn off debugging except for network error debugging, run
2537 the following command on all nodes concerned:</para>
2538 <screen># sysctl -w lnet.debug="neterror"
2539 debug=neterror</screen>
2544 <para>To turn off debugging completely (except for the minimum error
reporting to the console), run the following command on all nodes
concerned:</para>
<screen># lctl set_param debug=0
debug=0</screen>
<para>To set an appropriate debug level for a production environment,
run:</para>
2553 <screen># lctl set_param debug="warning dlmtrace error emerg ha rpctrace vfstrace"
2554 debug=warning dlmtrace error emerg ha rpctrace vfstrace</screen>
2555 <para>The flags shown in this example collect enough high-level
2556 information to aid debugging, but they do not cause any serious
2557 performance impact.</para>
2562 <para>To add new flags to flags that have already been set,
2563 precede each one with a "<literal>+</literal>":</para>
2564 <screen># lctl set_param debug="+neterror +ha"
2566 # lctl get_param debug
2567 debug=neterror warning error emerg ha console</screen>
2570 <para>To remove individual flags, precede them with a
2571 "<literal>-</literal>":</para>
2572 <screen># lctl set_param debug="-ha"
2574 # lctl get_param debug
2575 debug=neterror warning error emerg console</screen>
2579 <para>Debugging parameters include:</para>
2582 <para><literal>subsystem_debug</literal> - Controls the debug logs for subsystems.</para>
2585 <para><literal>debug_path</literal> - Indicates the location where the debug log is dumped
2586 when triggered automatically or manually. The default path is
2587 <literal>/tmp/lustre-log</literal>.</para>
2590 <para>These parameters can also be set using:<screen>sysctl -w lnet.debug={value}</screen></para>
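<para>For example, the subsystem mask can be read in the same way as the debug mask
shown earlier (the output varies with the configuration):</para>
<screen># lctl get_param subsystem_debug</screen>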
2591 <para>Additional useful parameters: <itemizedlist>
<para><literal>panic_on_lbug</literal> - Causes <literal>panic</literal> to be called
2594 when the Lustre software detects an internal problem (an <literal>LBUG</literal> log
2595 entry); panic crashes the node. This is particularly useful when a kernel crash dump
2596 utility is configured. The crash dump is triggered when the internal inconsistency is
2597 detected by the Lustre software. </para>
2600 <para><literal>upcall</literal> - Allows you to specify the path to the binary which will
2601 be invoked when an <literal>LBUG</literal> log entry is encountered. This binary is
2602 called with four parameters:</para>
<para> - The string <literal>LBUG</literal>.</para>
2604 <para> - The file where the <literal>LBUG</literal> occurred.</para>
2605 <para> - The function name.</para>
<para> - The line number in the file.</para>
2608 </itemizedlist></para>
2610 <title>Interpreting OST Statistics</title>
2613 <xref linkend="dbdoclet.50438273_80593"/> (<literal>collectl</literal>).</para>
2615 <para>OST <literal>stats</literal> files can be used to provide statistics showing activity
2616 for each OST. For example:</para>
2617 <screen># lctl get_param osc.testfs-OST0000-osc.stats
2618 snapshot_time 1189732762.835363
2623 obd_ping 212</screen>
2624 <para>Use the <literal>llstat</literal> utility to monitor statistics over time.</para>
2625 <para>To clear the statistics, use the <literal>-c</literal> option to
2626 <literal>llstat</literal>. To specify how frequently the statistics
2627 should be reported (in seconds), use the <literal>-i</literal> option.
2628 In the example below, the <literal>-c</literal> option clears the
statistics and the <literal>-i10</literal> option reports statistics every
10 seconds.</para>
2631 <screen role="smaller">$ llstat -c -i10 ost_io
2633 /usr/bin/llstat: STATS on 06/06/07
2634 /proc/fs/lustre/ost/OSS/ost_io/ stats on 192.168.16.35@tcp
2635 snapshot_time 1181074093.276072
2637 /proc/fs/lustre/ost/OSS/ost_io/stats @ 1181074103.284895
2639 Count Rate Events Unit last min avg max stddev
2640 req_waittime 8 0 8 [usec] 2078 34 259.75 868 317.49
2641 req_qdepth 8 0 8 [reqs] 1 0 0.12 1 0.35
2642 req_active 8 0 8 [reqs] 11 1 1.38 2 0.52
2643 reqbuf_avail 8 0 8 [bufs] 511 63 63.88 64 0.35
2644 ost_write 8 0 8 [bytes] 169767 72914 212209.62 387579 91874.29
2646 /proc/fs/lustre/ost/OSS/ost_io/stats @ 1181074113.290180
2648 Count Rate Events Unit last min avg max stddev
2649 req_waittime 31 3 39 [usec] 30011 34 822.79 12245 2047.71
2650 req_qdepth 31 3 39 [reqs] 0 0 0.03 1 0.16
2651 req_active 31 3 39 [reqs] 58 1 1.77 3 0.74
2652 reqbuf_avail 31 3 39 [bufs] 1977 63 63.79 64 0.41
2653 ost_write 30 3 38 [bytes] 1028467 15019 315325.16 910694 197776.51
2655 /proc/fs/lustre/ost/OSS/ost_io/stats @ 1181074123.325560
2657 Count Rate Events Unit last min avg max stddev
2658 req_waittime 21 2 60 [usec] 14970 34 784.32 12245 1878.66
2659 req_qdepth 21 2 60 [reqs] 0 0 0.02 1 0.13
2660 req_active 21 2 60 [reqs] 33 1 1.70 3 0.70
2661 reqbuf_avail 21 2 60 [bufs] 1341 63 63.82 64 0.39
ost_write 21 2 60 [bytes] 7648424 15019 332725.08 910694 180397.87</screen>
2664 <para>The columns in this example are described in the table below.</para>
2665 <informaltable frame="all">
2667 <colspec colname="c1" colwidth="50*"/>
2668 <colspec colname="c2" colwidth="50*"/>
2672 <para><emphasis role="bold">Parameter</emphasis></para>
2675 <para><emphasis role="bold">Description</emphasis></para>
2681 <entry><literal>Name</literal></entry>
2682 <entry>Name of the service event. See the tables below for descriptions of service
2683 events that are tracked.</entry>
2688 <literal>Cur. Count </literal></para>
2691 <para>Number of events of each type sent in the last interval.</para>
2697 <literal>Cur. Rate </literal></para>
2700 <para>Number of events per second in the last interval.</para>
2706 <literal> # Events </literal></para>
2709 <para>Total number of such events since the events have been cleared.</para>
2715 <literal> Unit </literal></para>
<para>Unit of measurement for that statistic (for example, microseconds,
requests, buffers, or bytes).</para>
2725 <literal> last </literal></para>
<para>Average rate of these events (in units/event) for the last interval during
which they arrived. For instance, in the case of
<literal>ost_destroy</literal>, it might take an average of 736 microseconds per
destroy for 400 object destroys in the previous 10 seconds.</para>
2737 <literal> min </literal></para>
2740 <para>Minimum rate (in units/events) since the service started.</para>
2746 <literal> avg </literal></para>
2749 <para>Average rate.</para>
2755 <literal> max </literal></para>
2758 <para>Maximum rate.</para>
2764 <literal> stddev </literal></para>
2767 <para>Standard deviation (not measured in some cases)</para>
2773 <para>Events common to all services are shown in the table below.</para>
2774 <informaltable frame="all">
2776 <colspec colname="c1" colwidth="50*"/>
2777 <colspec colname="c2" colwidth="50*"/>
2781 <para><emphasis role="bold">Parameter</emphasis></para>
2784 <para><emphasis role="bold">Description</emphasis></para>
2792 <literal> req_waittime </literal></para>
2795 <para>Amount of time a request waited in the queue before being handled by an
2796 available server thread.</para>
2802 <literal> req_qdepth </literal></para>
2805 <para>Number of requests waiting to be handled in the queue for this service.</para>
2811 <literal> req_active </literal></para>
2814 <para>Number of requests currently being handled.</para>
2820 <literal> reqbuf_avail </literal></para>
<para>Number of unsolicited LNet request buffers for this service.</para>
2829 <para>Some service-specific events of interest are described in the table below.</para>
2830 <informaltable frame="all">
2832 <colspec colname="c1" colwidth="50*"/>
2833 <colspec colname="c2" colwidth="50*"/>
2837 <para><emphasis role="bold">Parameter</emphasis></para>
2840 <para><emphasis role="bold">Description</emphasis></para>
2848 <literal> ldlm_enqueue </literal></para>
2851 <para>Time it takes to enqueue a lock (this includes file open on the MDS)</para>
2857 <literal> mds_reint </literal></para>
2860 <para>Time it takes to process an MDS modification record (includes
2861 <literal>create</literal>, <literal>mkdir</literal>, <literal>unlink</literal>,
2862 <literal>rename</literal> and <literal>setattr</literal>)</para>
2870 <title>Interpreting MDT Statistics</title>
2873 <xref linkend="dbdoclet.50438273_80593"/> (<literal>collectl</literal>).</para>
2875 <para>MDT <literal>stats</literal> files can be used to track MDT
2876 statistics for the MDS. The example below shows sample output from an
2877 MDT <literal>stats</literal> file.</para>
2878 <screen># lctl get_param mds.*-MDT0000.stats
2879 snapshot_time 1244832003.676892 secs.usecs
2880 open 2 samples [reqs]
2881 close 1 samples [reqs]
2882 getxattr 3 samples [reqs]
2883 process_config 1 samples [reqs]
2884 connect 2 samples [reqs]
2885 disconnect 2 samples [reqs]
2886 statfs 3 samples [reqs]
2887 setattr 1 samples [reqs]
2888 getattr 3 samples [reqs]
2889 llog_init 6 samples [reqs]
2890 notify 16 samples [reqs]</screen>
<!-- vim:expandtab:shiftwidth=2:tabstop=8: -->