1 <?xml version='1.0' encoding='UTF-8'?>
2 <chapter xmlns="http://docbook.org/ns/docbook" xmlns:xl="http://www.w3.org/1999/xlink" version="5.0"
3 xml:lang="en-US" xml:id="lustreproc">
4 <title xml:id="lustreproc.title">Lustre Parameters</title>
5 <para>The <literal>/proc</literal> and <literal>/sys</literal> file systems
act as an interface to internal data structures in the kernel. This chapter
7 describes parameters and tunables that are useful for optimizing and
8 monitoring aspects of a Lustre file system. It includes these sections:</para>
11 <para><xref linkend="dbdoclet.50438271_83523"/></para>
16 <title>Introduction to Lustre Parameters</title>
17 <para>Lustre parameters and statistics files provide an interface to
18 internal data structures in the kernel that enables monitoring and
19 tuning of many aspects of Lustre file system and application performance.
20 These data structures include settings and metrics for components such
21 as memory, networking, file systems, and kernel housekeeping routines,
22 which are available throughout the hierarchical file layout.
<para>Typically, metrics are read with <literal>lctl get_param</literal>
and settings are changed with <literal>lctl set_param</literal>.
26 Some data is server-only, some data is client-only, and some data is
27 exported from the client to the server and is thus duplicated in both
30 <para>In the examples in this chapter, <literal>#</literal> indicates
31 a command is entered as root. Lustre servers are named according to the
32 convention <literal><replaceable>fsname</replaceable>-<replaceable>MDT|OSTnumber</replaceable></literal>.
33 The standard UNIX wildcard designation (*) is used.</para>
35 <para>Some examples are shown below:</para>
38 <para> To obtain data from a Lustre client:</para>
39 <screen># lctl list_param osc.*
40 osc.testfs-OST0000-osc-ffff881071d5cc00
41 osc.testfs-OST0001-osc-ffff881071d5cc00
42 osc.testfs-OST0002-osc-ffff881071d5cc00
43 osc.testfs-OST0003-osc-ffff881071d5cc00
44 osc.testfs-OST0004-osc-ffff881071d5cc00
45 osc.testfs-OST0005-osc-ffff881071d5cc00
46 osc.testfs-OST0006-osc-ffff881071d5cc00
47 osc.testfs-OST0007-osc-ffff881071d5cc00
48 osc.testfs-OST0008-osc-ffff881071d5cc00</screen>
49 <para>In this example, information about OST connections available
50 on a client is displayed (indicated by "osc").</para>
55 <para> To see multiple levels of parameters, use multiple
56 wildcards:<screen># lctl list_param osc.*.*
57 osc.testfs-OST0000-osc-ffff881071d5cc00.active
58 osc.testfs-OST0000-osc-ffff881071d5cc00.blocksize
59 osc.testfs-OST0000-osc-ffff881071d5cc00.checksum_type
60 osc.testfs-OST0000-osc-ffff881071d5cc00.checksums
61 osc.testfs-OST0000-osc-ffff881071d5cc00.connect_flags
62 osc.testfs-OST0000-osc-ffff881071d5cc00.contention_seconds
63 osc.testfs-OST0000-osc-ffff881071d5cc00.cur_dirty_bytes
65 osc.testfs-OST0000-osc-ffff881071d5cc00.rpc_stats</screen></para>
70 <para> To view a specific file, use <literal>lctl get_param</literal>:
71 <screen># lctl get_param osc.lustre-OST0000*.rpc_stats</screen></para>
74 <para>For more information about using <literal>lctl</literal>, see <xref
75 xmlns:xlink="http://www.w3.org/1999/xlink" linkend="dbdoclet.50438194_51490"/>.</para>
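<para>Parameters are changed with <literal>lctl set_param</literal> using
the same dotted path syntax. As an illustrative sketch only, using the
<literal>checksums</literal> parameter from the listing above as an
arbitrary example:
<screen># lctl set_param osc.testfs-OST*.checksums=0
osc.testfs-OST0000-osc-ffff881071d5cc00.checksums=0</screen></para>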
76 <para>Data can also be viewed using the <literal>cat</literal> command
77 with the full path to the file. The form of the <literal>cat</literal>
78 command is similar to that of the <literal>lctl get_param</literal>
79 command with some differences. Unfortunately, as the Linux kernel has
80 changed over the years, the location of statistics and parameter files
81 has also changed, which means that the Lustre parameter files may be
located in the <literal>/proc</literal> directory, the
<literal>/sys</literal> directory, and/or the
<literal>/sys/kernel/debug</literal> directory, depending on the kernel
85 version and the Lustre version being used. The <literal>lctl</literal>
86 command insulates scripts from these changes and is preferred over direct
file access, except as part of a high-performance monitoring system.
88 In the <literal>cat</literal> command:</para>
91 <para>Replace the dots in the path with slashes.</para>
94 <para>Prepend the path with the following as appropriate:
95 <screen>/{proc,sys}/{fs,sys}/{lustre,lnet}</screen></para>
98 <para>For example, an <literal>lctl get_param</literal> command may look like
99 this:<screen># lctl get_param osc.*.uuid
100 osc.testfs-OST0000-osc-ffff881071d5cc00.uuid=594db456-0685-bd16-f59b-e72ee90e9819
101 osc.testfs-OST0001-osc-ffff881071d5cc00.uuid=594db456-0685-bd16-f59b-e72ee90e9819
103 <para>The equivalent <literal>cat</literal> command may look like this:
104 <screen># cat /proc/fs/lustre/osc/*/uuid
105 594db456-0685-bd16-f59b-e72ee90e9819
106 594db456-0685-bd16-f59b-e72ee90e9819
109 <screen># cat /sys/fs/lustre/osc/*/uuid
110 594db456-0685-bd16-f59b-e72ee90e9819
111 594db456-0685-bd16-f59b-e72ee90e9819
113 <para>The <literal>llstat</literal> utility can be used to monitor some
114 Lustre file system I/O activity over a specified time period. For more
116 <xref xmlns:xlink="http://www.w3.org/1999/xlink" linkend="dbdoclet.50438219_23232"/></para>
117 <para>Some data is imported from attached clients and is available in a
118 directory called <literal>exports</literal> located in the corresponding
119 per-service directory on a Lustre server. For example:
120 <screen>oss:/root# lctl list_param obdfilter.testfs-OST0000.exports.*
121 # hash ldlm_stats stats uuid</screen></para>
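<para>The per-export files are read with the same syntax. For example, a
minimal sketch that reads the <literal>stats</literal> file listed above
for every client export of that OST:
<screen>oss:/root# lctl get_param obdfilter.testfs-OST0000.exports.*.stats</screen></para>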
123 <title>Identifying Lustre File Systems and Servers</title>
124 <para>Several <literal>/proc</literal> files on the MGS list existing
125 Lustre file systems and file system servers. The examples below are for
126 a Lustre file system called
127 <literal>testfs</literal> with one MDT and three OSTs.</para>
130 <para> To view all known Lustre file systems, enter:</para>
131 <screen>mgs# lctl get_param mgs.*.filesystems
<para> To view the names of the servers in a file system in which at least one server is
137 enter:<screen>lctl get_param mgs.*.live.<replaceable><filesystem name></replaceable></screen></para>
138 <para>For example:</para>
139 <screen>mgs# lctl get_param mgs.*.live.testfs
147 Secure RPC Config Rules:
149 imperative_recovery_state:
153 notify_duration_total: 0.001000
154 notify_duation_max: 0.001000
155 notify_count: 4</screen>
158 <para>To view the names of all live servers in the file system as listed in
159 <literal>/proc/fs/lustre/devices</literal>, enter:</para>
160 <screen># lctl device_list
162 1 UP mgc MGC192.168.10.34@tcp 1f45bb57-d9be-2ddb-c0b0-5431a49226705
163 2 UP mdt MDS MDS_uuid 3
164 3 UP lov testfs-mdtlov testfs-mdtlov_UUID 4
165 4 UP mds testfs-MDT0000 testfs-MDT0000_UUID 7
166 5 UP osc testfs-OST0000-osc testfs-mdtlov_UUID 5
167 6 UP osc testfs-OST0001-osc testfs-mdtlov_UUID 5
168 7 UP lov testfs-clilov-ce63ca00 08ac6584-6c4a-3536-2c6d-b36cf9cbdaa04
169 8 UP mdc testfs-MDT0000-mdc-ce63ca00 08ac6584-6c4a-3536-2c6d-b36cf9cbdaa05
170 9 UP osc testfs-OST0000-osc-ce63ca00 08ac6584-6c4a-3536-2c6d-b36cf9cbdaa05
171 10 UP osc testfs-OST0001-osc-ce63ca00 08ac6584-6c4a-3536-2c6d-b36cf9cbdaa05</screen>
172 <para>The information provided on each line includes:</para>
173 <para> - Device number</para>
174 <para> - Device status (UP, INactive, or STopping) </para>
175 <para> - Device name</para>
176 <para> - Device UUID</para>
177 <para> - Reference count (how many users this device has)</para>
180 <para>To display the name of any server, view the device
181 label:<screen>mds# e2label /dev/sda
182 testfs-MDT0000</screen></para>
188 <title>Tuning Multi-Block Allocation (mballoc)</title>
189 <para>Capabilities supported by <literal>mballoc</literal> include:</para>
192 <para> Pre-allocation for single files to help to reduce fragmentation.</para>
195 <para> Pre-allocation for a group of files to enable packing of small files into large,
196 contiguous chunks.</para>
199 <para> Stream allocation to help decrease the seek rate.</para>
202 <para>The following <literal>mballoc</literal> tunables are available:</para>
203 <informaltable frame="all">
205 <colspec colname="c1" colwidth="30*"/>
206 <colspec colname="c2" colwidth="70*"/>
210 <para><emphasis role="bold">Field</emphasis></para>
213 <para><emphasis role="bold">Description</emphasis></para>
221 <literal>mb_max_to_scan</literal></para>
<para>Maximum number of free chunks that <literal>mballoc</literal> searches before making a
final allocation decision, to avoid a livelock situation.</para>
231 <literal>mb_min_to_scan</literal></para>
234 <para>Minimum number of free chunks that <literal>mballoc</literal> searches before
235 picking the best chunk for allocation. This is useful for small requests to reduce
236 fragmentation of big free chunks.</para>
242 <literal>mb_order2_req</literal></para>
245 <para>For requests equal to 2^N, where N >= <literal>mb_order2_req</literal>, a
246 fast search is done using a base 2 buddy allocation service.</para>
252 <literal>mb_small_req</literal></para>
255 <para><literal>mb_small_req</literal> - Defines (in MB) the upper bound of "small
257 <para><literal>mb_large_req</literal> - Defines (in MB) the lower bound of "large
259 <para>Requests are handled differently based on size:<itemizedlist>
261 <para>< <literal>mb_small_req</literal> - Requests are packed together to
262 form large, aggregated requests.</para>
265 <para>> <literal>mb_small_req</literal> and < <literal>mb_large_req</literal>
266 - Requests are primarily allocated linearly.</para>
<para>> <literal>mb_large_req</literal> - Requests are allocated directly, since hard disk
seek time is less of a concern in this case.</para>
272 </itemizedlist></para>
273 <para>In general, small requests are combined to create larger requests, which are
274 then placed close to one another to minimize the number of seeks required to access
281 <literal>mb_large_req</literal></para>
287 <literal>mb_prealloc_table</literal></para>
290 <para>A table of values used to preallocate space when a new request is received. By
291 default, the table looks like
292 this:<screen>prealloc_table
293 4 8 16 32 64 128 256 512 1024 2048 </screen></para>
294 <para>When a new request is received, space is preallocated at the next higher
295 increment specified in the table. For example, for requests of less than 4 file
system blocks, 4 blocks of space are preallocated; for requests between 4 and 8
blocks, 8 blocks are preallocated; and so forth.</para>
298 <para>Although customized values can be entered in the table, the performance of
299 general usage file systems will not typically be improved by modifying the table (in
300 fact, in ext4 systems, the table values are fixed). However, for some specialized
301 workloads, tuning the <literal>prealloc_table</literal> values may result in smarter
302 preallocation decisions. </para>
308 <literal>mb_group_prealloc</literal></para>
311 <para>The amount of space (in kilobytes) preallocated for groups of small
318 <para>Buddy group cache information found in
319 <literal>/proc/fs/ldiskfs/<replaceable>disk_device</replaceable>/mb_groups</literal> may
320 be useful for assessing on-disk fragmentation. For
321 example:<screen>cat /proc/fs/ldiskfs/loop0/mb_groups
322 #group: free free frags first pa [ 2^0 2^1 2^2 2^3 2^4 2^5 2^6 2^7 2^8 2^9
324 #0 : 2936 2936 1 42 0 [ 0 0 0 1 1 1 1 2 0 1
325 2 0 0 0 ]</screen></para>
326 <para>In this example, the columns show:<itemizedlist>
328 <para>#group number</para>
331 <para>Available blocks in the group</para>
334 <para>Blocks free on a disk</para>
337 <para>Number of free fragments</para>
340 <para>First free block in the group</para>
343 <para>Number of preallocated chunks (not blocks)</para>
346 <para>A series of available chunks of different sizes</para>
348 </itemizedlist></para>
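<para>The <literal>mballoc</literal> tunables described above are exposed
in the same per-device <literal>ldiskfs</literal> directory as
<literal>mb_groups</literal>. The following is a hedged sketch only (the
exact location depends on the kernel and Lustre versions in use, and the
value shown is arbitrary):
<screen>oss# cat /proc/fs/ldiskfs/loop0/mb_max_to_scan
oss# echo 200 > /proc/fs/ldiskfs/loop0/mb_max_to_scan</screen></para>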
351 <title>Monitoring Lustre File System I/O</title>
352 <para>A number of system utilities are provided to enable collection of data related to I/O
353 activity in a Lustre file system. In general, the data collected describes:</para>
356 <para> Data transfer rates and throughput of inputs and outputs external to the Lustre file
357 system, such as network requests or disk I/O operations performed</para>
360 <para> Data about the throughput or transfer rates of internal Lustre file system data, such
361 as locks or allocations. </para>
365 <para>It is highly recommended that you complete baseline testing for your Lustre file system
366 to determine normal I/O activity for your hardware, network, and system workloads. Baseline
367 data will allow you to easily determine when performance becomes degraded in your system.
368 Two particularly useful baseline statistics are:</para>
371 <para><literal>brw_stats</literal> – Histogram data characterizing I/O requests to the
372 OSTs. For more details, see <xref xmlns:xlink="http://www.w3.org/1999/xlink"
373 linkend="dbdoclet.50438271_55057"/>.</para>
376 <para><literal>rpc_stats</literal> – Histogram data showing information about RPCs made by
377 clients. For more details, see <xref xmlns:xlink="http://www.w3.org/1999/xlink"
378 linkend="MonitoringClientRCPStream"/>.</para>
382 <section remap="h3" xml:id="MonitoringClientRCPStream">
384 <primary>proc</primary>
385 <secondary>watching RPC</secondary>
386 </indexterm>Monitoring the Client RPC Stream</title>
387 <para>The <literal>rpc_stats</literal> file contains histogram data showing information about
388 remote procedure calls (RPCs) that have been made since this file was last cleared. The
389 histogram data can be cleared by writing any value into the <literal>rpc_stats</literal>
391 <para><emphasis role="italic"><emphasis role="bold">Example:</emphasis></emphasis></para>
392 <screen># lctl get_param osc.testfs-OST0000-osc-ffff810058d2f800.rpc_stats
393 snapshot_time: 1372786692.389858 (secs.usecs)
394 read RPCs in flight: 0
395 write RPCs in flight: 1
396 dio read RPCs in flight: 0
397 dio write RPCs in flight: 0
398 pending write pages: 256
399 pending read pages: 0
402 pages per rpc rpcs % cum % | rpcs % cum %
411 256: 850 100 100 | 18346 99 100
414 rpcs in flight rpcs % cum % | rpcs % cum %
415 0: 691 81 81 | 1740 9 9
416 1: 48 5 86 | 938 5 14
417 2: 29 3 90 | 1059 5 20
418 3: 17 2 92 | 1052 5 26
419 4: 13 1 93 | 920 5 31
420 5: 12 1 95 | 425 2 33
421 6: 10 1 96 | 389 2 35
422 7: 30 3 100 | 11373 61 97
423 8: 0 0 100 | 460 2 100
426 offset rpcs % cum % | rpcs % cum %
427 0: 850 100 100 | 18347 99 99
435 128: 0 0 100 | 4 0 100
438 <para>The header information includes:</para>
441 <para><literal>snapshot_time</literal> - UNIX epoch instant the file was read.</para>
444 <para><literal>read RPCs in flight</literal> - Number of read RPCs issued by the OSC, but
445 not complete at the time of the snapshot. This value should always be less than or equal
446 to <literal>max_rpcs_in_flight</literal>.</para>
449 <para><literal>write RPCs in flight</literal> - Number of write RPCs issued by the OSC,
450 but not complete at the time of the snapshot. This value should always be less than or
451 equal to <literal>max_rpcs_in_flight</literal>.</para>
454 <para><literal>dio read RPCs in flight</literal> - Direct I/O (as opposed to block I/O)
455 read RPCs issued but not completed at the time of the snapshot.</para>
458 <para><literal>dio write RPCs in flight</literal> - Direct I/O (as opposed to block I/O)
459 write RPCs issued but not completed at the time of the snapshot.</para>
462 <para><literal>pending write pages</literal> - Number of pending write pages that have
463 been queued for I/O in the OSC.</para>
466 <para><literal>pending read pages</literal> - Number of pending read pages that have been
467 queued for I/O in the OSC.</para>
470 <para>The tabular data is described in the table below. Each row in the table shows the number
of reads or writes (<literal>rpcs</literal>) occurring for the statistic, the relative
472 percentage (<literal>%</literal>) of total reads or writes, and the cumulative percentage
473 (<literal>cum %</literal>) to that point in the table for the statistic.</para>
474 <informaltable frame="all">
476 <colspec colname="c1" colwidth="40*"/>
477 <colspec colname="c2" colwidth="60*"/>
481 <para><emphasis role="bold">Field</emphasis></para>
484 <para><emphasis role="bold">Description</emphasis></para>
491 <para> pages per RPC</para>
494 <para>Shows cumulative RPC reads and writes organized according to the number of
495 pages in the RPC. A single page RPC increments the <literal>0:</literal>
501 <para> RPCs in flight</para>
504 <para> Shows the number of RPCs that are pending when an RPC is sent. When the first
RPC is sent, the <literal>0:</literal> row is incremented. If an RPC is
sent while another RPC is pending, the <literal>1:</literal> row is incremented
515 <para> The page index of the first page read from or written to the object by the
522 <para><emphasis role="italic"><emphasis role="bold">Analysis:</emphasis></emphasis></para>
523 <para>This table provides a way to visualize the concurrency of the RPC stream. Ideally, you
will see a large clump around the <literal>max_rpcs_in_flight</literal> value, which shows
525 that the network is being kept busy.</para>
526 <para>For information about optimizing the client I/O RPC stream, see <xref
527 xmlns:xlink="http://www.w3.org/1999/xlink" linkend="TuningClientIORPCStream"/>.</para>
529 <section xml:id="lustreproc.clientstats" remap="h3">
531 <primary>proc</primary>
532 <secondary>client stats</secondary>
533 </indexterm>Monitoring Client Activity</title>
<para>The <literal>stats</literal> file maintains statistics accumulated during typical
535 operation of a client across the VFS interface of the Lustre file system. Only non-zero
536 parameters are displayed in the file. </para>
537 <para>Client statistics are enabled by default.</para>
539 <para>Statistics for all mounted file systems can be discovered by
540 entering:<screen>lctl get_param llite.*.stats</screen></para>
542 <para><emphasis role="italic"><emphasis role="bold">Example:</emphasis></emphasis></para>
543 <screen>client# lctl get_param llite.*.stats
544 snapshot_time 1308343279.169704 secs.usecs
545 dirty_pages_hits 14819716 samples [regs]
546 dirty_pages_misses 81473472 samples [regs]
547 read_bytes 36502963 samples [bytes] 1 26843582 55488794
548 write_bytes 22985001 samples [bytes] 0 125912 3379002
549 brw_read 2279 samples [pages] 1 1 2270
550 ioctl 186749 samples [regs]
551 open 3304805 samples [regs]
552 close 3331323 samples [regs]
553 seek 48222475 samples [regs]
554 fsync 963 samples [regs]
555 truncate 9073 samples [regs]
556 setxattr 19059 samples [regs]
557 getxattr 61169 samples [regs]
559 <para> The statistics can be cleared by echoing an empty string into the
560 <literal>stats</literal> file or by using the command:
561 <screen>lctl set_param llite.*.stats=0</screen></para>
562 <para>The statistics displayed are described in the table below.</para>
563 <informaltable frame="all">
565 <colspec colname="c1" colwidth="3*"/>
566 <colspec colname="c2" colwidth="7*"/>
570 <para><emphasis role="bold">Entry</emphasis></para>
573 <para><emphasis role="bold">Description</emphasis></para>
581 <literal>snapshot_time</literal></para>
584 <para>UNIX epoch instant the stats file was read.</para>
<literal>dirty_pages_hits</literal></para>
593 <para>The number of write operations that have been satisfied by the dirty page
594 cache. See <xref xmlns:xlink="http://www.w3.org/1999/xlink"
595 linkend="TuningClientIORPCStream"/> for more information about dirty cache
596 behavior in a Lustre file system.</para>
<literal>dirty_pages_misses</literal></para>
605 <para>The number of write operations that were not satisfied by the dirty page
612 <literal>read_bytes</literal></para>
615 <para>The number of read operations that have occurred. Three additional parameters
616 are displayed:</para>
621 <para>The minimum number of bytes read in a single request since the counter
628 <para>The maximum number of bytes read in a single request since the counter
635 <para>The accumulated sum of bytes of all read requests since the counter was
645 <literal>write_bytes</literal></para>
648 <para>The number of write operations that have occurred. Three additional parameters
649 are displayed:</para>
654 <para>The minimum number of bytes written in a single request since the
655 counter was reset.</para>
661 <para>The maximum number of bytes written in a single request since the
662 counter was reset.</para>
668 <para>The accumulated sum of bytes of all write requests since the counter was
678 <literal>brw_read</literal></para>
681 <para>The number of pages that have been read. Three additional parameters are
687 <para>The minimum number of bytes read in a single block read/write
688 (<literal>brw</literal>) read request since the counter was reset.</para>
694 <para>The maximum number of bytes read in a single <literal>brw</literal> read
request since the counter was reset.</para>
701 <para>The accumulated sum of bytes of all <literal>brw</literal> read requests
702 since the counter was reset.</para>
711 <literal>ioctl</literal></para>
714 <para>The number of combined file and directory <literal>ioctl</literal>
721 <literal>open</literal></para>
724 <para>The number of open operations that have succeeded.</para>
730 <literal>close</literal></para>
733 <para>The number of close operations that have succeeded.</para>
739 <literal>seek</literal></para>
742 <para>The number of times <literal>seek</literal> has been called.</para>
748 <literal>fsync</literal></para>
751 <para>The number of times <literal>fsync</literal> has been called.</para>
757 <literal>truncate</literal></para>
760 <para>The total number of calls to both locked and lockless
761 <literal>truncate</literal>.</para>
767 <literal>setxattr</literal></para>
770 <para>The number of times extended attributes have been set. </para>
776 <literal>getxattr</literal></para>
779 <para>The number of times value(s) of extended attributes have been fetched.</para>
785 <para><emphasis role="italic"><emphasis role="bold">Analysis:</emphasis></emphasis></para>
<para>Information is provided about the amount and type of I/O activity taking place on the
791 <primary>proc</primary>
792 <secondary>read/write survey</secondary>
793 </indexterm>Monitoring Client Read-Write Offset Statistics</title>
794 <para>When the <literal>offset_stats</literal> parameter is set, statistics are maintained for
795 occurrences of a series of read or write calls from a process that did not access the next
796 sequential location. The <literal>OFFSET</literal> field is reset to 0 (zero) whenever a
797 different file is read or written.</para>
799 <para>By default, statistics are not collected in the <literal>offset_stats</literal>,
800 <literal>extents_stats</literal>, and <literal>extents_stats_per_process</literal> files
801 to reduce monitoring overhead when this information is not needed. The collection of
802 statistics in all three of these files is activated by writing anything into any one of
805 <para><emphasis role="italic"><emphasis role="bold">Example:</emphasis></emphasis></para>
806 <screen># lctl get_param llite.testfs-f57dee0.offset_stats
807 snapshot_time: 1155748884.591028 (secs.usecs)
808 RANGE RANGE SMALLEST LARGEST
809 R/W PID START END EXTENT EXTENT OFFSET
810 R 8385 0 128 128 128 0
811 R 8385 0 224 224 224 -128
812 W 8385 0 250 50 100 0
813 W 8385 100 1110 10 500 -150
814 W 8384 0 5233 5233 5233 0
815 R 8385 500 600 100 100 -610</screen>
816 <para>In this example, <literal>snapshot_time</literal> is the UNIX epoch instant the file was
817 read. The tabular data is described in the table below.</para>
818 <para>The <literal>offset_stats</literal> file can be cleared by
819 entering:<screen>lctl set_param llite.*.offset_stats=0</screen></para>
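<para>Because collection is disabled by default (see the note above),
writing any value into the file also enables it. A minimal sketch:
<screen>lctl set_param llite.*.offset_stats=1</screen></para>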
820 <informaltable frame="all">
822 <colspec colname="c1" colwidth="50*"/>
823 <colspec colname="c2" colwidth="50*"/>
827 <para><emphasis role="bold">Field</emphasis></para>
830 <para><emphasis role="bold">Description</emphasis></para>
<para>Indicates whether the non-sequential call was a read or a write.</para>
848 <para>Process ID of the process that made the read/write call.</para>
853 <para>RANGE START/RANGE END</para>
856 <para>Range in which the read/write calls were sequential.</para>
861 <para>SMALLEST EXTENT </para>
864 <para>Smallest single read/write in the corresponding range (in bytes).</para>
869 <para>LARGEST EXTENT </para>
872 <para>Largest single read/write in the corresponding range (in bytes).</para>
880 <para>Difference between the previous range end and the current range start.</para>
886 <para><emphasis role="italic"><emphasis role="bold">Analysis:</emphasis></emphasis></para>
887 <para>This data provides an indication of how contiguous or fragmented the data is. For
888 example, the fourth entry in the example above shows the writes for this RPC were sequential
889 in the range 100 to 1110 with the minimum write 10 bytes and the maximum write 500 bytes.
890 The range started with an offset of -150 from the <literal>RANGE END</literal> of the
891 previous entry in the example.</para>
895 <primary>proc</primary>
896 <secondary>read/write survey</secondary>
897 </indexterm>Monitoring Client Read-Write Extent Statistics</title>
898 <para>For in-depth troubleshooting, client read-write extent statistics can be accessed to
899 obtain more detail about read/write I/O extents for the file system or for a particular
902 <para>By default, statistics are not collected in the <literal>offset_stats</literal>,
903 <literal>extents_stats</literal>, and <literal>extents_stats_per_process</literal> files
904 to reduce monitoring overhead when this information is not needed. The collection of
905 statistics in all three of these files is activated by writing anything into any one of
909 <title>Client-Based I/O Extent Size Survey</title>
<para>The <literal>extents_stats</literal> histogram in the <literal>llite</literal>
911 directory shows the statistics for the sizes of the read/write I/O extents. This file does
912 not maintain the per-process statistics.</para>
913 <para><emphasis role="italic"><emphasis role="bold">Example:</emphasis></emphasis></para>
914 <screen># lctl get_param llite.testfs-*.extents_stats
915 snapshot_time: 1213828728.348516 (secs.usecs)
917 extents calls % cum% | calls % cum%
919 0K - 4K : 0 0 0 | 2 2 2
920 4K - 8K : 0 0 0 | 0 0 2
921 8K - 16K : 0 0 0 | 0 0 2
922 16K - 32K : 0 0 0 | 20 23 26
923 32K - 64K : 0 0 0 | 0 0 26
924 64K - 128K : 0 0 0 | 51 60 86
925 128K - 256K : 0 0 0 | 0 0 86
926 256K - 512K : 0 0 0 | 0 0 86
927 512K - 1024K : 0 0 0 | 0 0 86
928 1M - 2M : 0 0 0 | 11 13 100</screen>
929 <para>In this example, <literal>snapshot_time</literal> is the UNIX epoch instant the file
930 was read. The table shows cumulative extents organized according to size with statistics
931 provided separately for reads and writes. Each row in the table shows the number of RPCs
932 for reads and writes respectively (<literal>calls</literal>), the relative percentage of
933 total calls (<literal>%</literal>), and the cumulative percentage to that point in the
934 table of calls (<literal>cum %</literal>). </para>
935 <para> The file can be cleared by issuing the following
936 command:<screen># lctl set_param llite.testfs-*.extents_stats=0</screen></para>
939 <title>Per-Process Client I/O Statistics</title>
940 <para>The <literal>extents_stats_per_process</literal> file maintains the I/O extent size
941 statistics on a per-process basis.</para>
942 <para><emphasis role="italic"><emphasis role="bold">Example:</emphasis></emphasis></para>
943 <screen># lctl get_param llite.testfs-*.extents_stats_per_process
944 snapshot_time: 1213828762.204440 (secs.usecs)
946 extents calls % cum% | calls % cum%
949 0K - 4K : 0 0 0 | 0 0 0
950 4K - 8K : 0 0 0 | 0 0 0
951 8K - 16K : 0 0 0 | 0 0 0
952 16K - 32K : 0 0 0 | 0 0 0
953 32K - 64K : 0 0 0 | 0 0 0
954 64K - 128K : 0 0 0 | 0 0 0
955 128K - 256K : 0 0 0 | 0 0 0
956 256K - 512K : 0 0 0 | 0 0 0
957 512K - 1024K : 0 0 0 | 0 0 0
958 1M - 2M : 0 0 0 | 10 100 100
961 0K - 4K : 0 0 0 | 0 0 0
962 4K - 8K : 0 0 0 | 0 0 0
963 8K - 16K : 0 0 0 | 0 0 0
964 16K - 32K : 0 0 0 | 20 100 100
967 0K - 4K : 0 0 0 | 0 0 0
968 4K - 8K : 0 0 0 | 0 0 0
969 8K - 16K : 0 0 0 | 0 0 0
970 16K - 32K : 0 0 0 | 0 0 0
971 32K - 64K : 0 0 0 | 0 0 0
972 64K - 128K : 0 0 0 | 16 100 100
975 0K - 4K : 0 0 0 | 1 100 100
978 0K - 4K : 0 0 0 | 1 100 100
981 <para>This table shows cumulative extents organized according to size for each process ID
982 (PID) with statistics provided separately for reads and writes. Each row in the table
983 shows the number of RPCs for reads and writes respectively (<literal>calls</literal>), the
984 relative percentage of total calls (<literal>%</literal>), and the cumulative percentage
985 to that point in the table of calls (<literal>cum %</literal>). </para>
988 <section xml:id="dbdoclet.50438271_55057">
990 <primary>proc</primary>
991 <secondary>block I/O</secondary>
992 </indexterm>Monitoring the OST Block I/O Stream</title>
993 <para>The <literal>brw_stats</literal> file in the <literal>obdfilter</literal> directory
contains histogram data showing statistics for the number of I/O requests sent to the disk,
995 their size, and whether they are contiguous on the disk or not.</para>
996 <para><emphasis role="italic"><emphasis role="bold">Example:</emphasis></emphasis></para>
997 <para>Enter on the OSS:</para>
998 <screen># lctl get_param obdfilter.testfs-OST0000.brw_stats
999 snapshot_time: 1372775039.769045 (secs.usecs)
1001 pages per bulk r/w rpcs % cum % | rpcs % cum %
1002 1: 108 100 100 | 39 0 0
1007 32: 0 0 100 | 17 0 0
1008 64: 0 0 100 | 12 0 0
1009 128: 0 0 100 | 24 0 0
1010 256: 0 0 100 | 23142 99 100
1013 discontiguous pages rpcs % cum % | rpcs % cum %
1014 0: 108 100 100 | 23245 100 100
1017 discontiguous blocks rpcs % cum % | rpcs % cum %
1018 0: 108 100 100 | 23243 99 99
1019 1: 0 0 100 | 2 0 100
1022 disk fragmented I/Os ios % cum % | ios % cum %
1024 1: 14 12 100 | 23243 99 99
1025 2: 0 0 100 | 2 0 100
1028 disk I/Os in flight ios % cum % | ios % cum %
1029 1: 14 100 100 | 20896 89 89
1030 2: 0 0 100 | 1071 4 94
1031 3: 0 0 100 | 573 2 96
1032 4: 0 0 100 | 300 1 98
1033 5: 0 0 100 | 166 0 98
1034 6: 0 0 100 | 108 0 99
1035 7: 0 0 100 | 81 0 99
1036 8: 0 0 100 | 47 0 99
1037 9: 0 0 100 | 5 0 100
1040 I/O time (1/1000s) ios % cum % | ios % cum %
1043 4: 14 12 100 | 27 0 0
1045 16: 0 0 100 | 31 0 0
1046 32: 0 0 100 | 38 0 0
1047 64: 0 0 100 | 18979 81 82
1048 128: 0 0 100 | 943 4 86
1049 256: 0 0 100 | 1233 5 91
1050 512: 0 0 100 | 1825 7 99
1051 1K: 0 0 100 | 99 0 99
1052 2K: 0 0 100 | 0 0 99
1053 4K: 0 0 100 | 0 0 99
1054 8K: 0 0 100 | 49 0 100
1057 disk I/O size ios % cum % | ios % cum %
1058 4K: 14 100 100 | 41 0 0
1060 16K: 0 0 100 | 1 0 0
1061 32K: 0 0 100 | 0 0 0
1062 64K: 0 0 100 | 4 0 0
1063 128K: 0 0 100 | 17 0 0
1064 256K: 0 0 100 | 12 0 0
1065 512K: 0 0 100 | 24 0 0
1066 1M: 0 0 100 | 23142 99 100
1068 <para>The tabular data is described in the table below. Each row in the table shows the number
1069 of reads and writes occurring for the statistic (<literal>ios</literal>), the relative
1070 percentage of total reads or writes (<literal>%</literal>), and the cumulative percentage to
1071 that point in the table for the statistic (<literal>cum %</literal>). </para>
1072 <informaltable frame="all">
1074 <colspec colname="c1" colwidth="40*"/>
1075 <colspec colname="c2" colwidth="60*"/>
1079 <para><emphasis role="bold">Field</emphasis></para>
1082 <para><emphasis role="bold">Description</emphasis></para>
1090 <literal>pages per bulk r/w</literal></para>
1093 <para>Number of pages per RPC request, which should match aggregate client
1094 <literal>rpc_stats</literal> (see <xref
1095 xmlns:xlink="http://www.w3.org/1999/xlink" linkend="MonitoringClientRCPStream"
1102 <literal>discontiguous pages</literal></para>
1105 <para>Number of discontinuities in the logical file offset of each page in a single
1112 <literal>discontiguous blocks</literal></para>
1115 <para>Number of discontinuities in the physical block allocation in the file system
1116 for a single RPC.</para>
1121 <para><literal>disk fragmented I/Os</literal></para>
1124 <para>Number of I/Os that were not written entirely sequentially.</para>
1129 <para><literal>disk I/Os in flight</literal></para>
1132 <para>Number of disk I/Os currently pending.</para>
1137 <para><literal>I/O time (1/1000s)</literal></para>
1140 <para>Amount of time for each I/O operation to complete.</para>
1145 <para><literal>disk I/O size</literal></para>
1148 <para>Size of each I/O operation.</para>
1154 <para><emphasis role="italic"><emphasis role="bold">Analysis:</emphasis></emphasis></para>
1155 <para>This data provides an indication of extent size and distribution in the file
1160 <title>Tuning Lustre File System I/O</title>
1161 <para>Each OSC has its own tree of tunables. For example:</para>
1162 <screen>$ ls -d /proc/fs/testfs/osc/OSC_client_ost1_MNT_client_2 /localhost
1163 /proc/fs/testfs/osc/OSC_uml0_ost1_MNT_localhost
1164 /proc/fs/testfs/osc/OSC_uml0_ost2_MNT_localhost
1165 /proc/fs/testfs/osc/OSC_uml0_ost3_MNT_localhost
1167 $ ls /proc/fs/testfs/osc/OSC_uml0_ost1_MNT_localhost
blocksize filesfree max_dirty_mb ost_server_uuid stats
1171 <para>The following sections describe some of the parameters that can be tuned in a Lustre file
1173 <section remap="h3" xml:id="TuningClientIORPCStream">
1175 <primary>proc</primary>
1176 <secondary>RPC tunables</secondary>
1177 </indexterm>Tuning the Client I/O RPC Stream</title>
1178 <para>Ideally, an optimal amount of data is packed into each I/O RPC and a consistent number
1179 of issued RPCs are in progress at any time. To help optimize the client I/O RPC stream,
1180 several tuning variables are provided to adjust behavior according to network conditions and
1181 cluster size. For information about monitoring the client I/O RPC stream, see <xref
1182 xmlns:xlink="http://www.w3.org/1999/xlink" linkend="MonitoringClientRCPStream"/>.</para>
1183 <para>RPC stream tunables include:</para>
1187 <para><literal>osc.<replaceable>osc_instance</replaceable>.max_dirty_mb</literal> -
1188 Controls how many MBs of dirty data can be written and queued up in the OSC. POSIX
1189 file writes that are cached contribute to this count. When the limit is reached,
1190 additional writes stall until previously-cached writes are written to the server. This
1191 may be changed by writing a single ASCII integer to the file. Only values between 0
1192 and 2048 or 1/4 of RAM are allowable. If 0 is specified, no writes are cached.
1193 Performance suffers noticeably unless you use large writes (1 MB or more).</para>
1194 <para>To maximize performance, the value for <literal>max_dirty_mb</literal> is
1195 recommended to be 4 * <literal>max_pages_per_rpc </literal>*
1196 <literal>max_rpcs_in_flight</literal>.</para>
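<para>For example, with a 1 MB <literal>max_pages_per_rpc</literal> and
<literal>max_rpcs_in_flight</literal> set to 8, this recommendation works
out to 4 * 1 MB * 8 = 32 MB. A sketch of applying that value (adjust to
your own RPC settings):
<screen>client# lctl set_param osc.testfs-OST*.max_dirty_mb=32</screen></para>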
1199 <para><literal>osc.<replaceable>osc_instance</replaceable>.cur_dirty_bytes</literal> - A
1200 read-only value that returns the current number of bytes written and cached on this
1204 <para><literal>osc.<replaceable>osc_instance</replaceable>.max_pages_per_rpc</literal> -
1205 The maximum number of pages that will undergo I/O in a single RPC to the OST. The
1206 minimum setting is a single page and the maximum setting is 1024 (for systems with a
1207 <literal>PAGE_SIZE</literal> of 4 KB), with the default maximum of 1 MB in the RPC.
1208 It is also possible to specify a units suffix (e.g. <literal>4M</literal>), so that
1209 the RPC size can be specified independently of the client
1210 <literal>PAGE_SIZE</literal>.</para>
1213 <para><literal>osc.<replaceable>osc_instance</replaceable>.max_rpcs_in_flight</literal>
1214 - The maximum number of concurrent RPCs in flight from an OSC to its OST. If the OSC
1215 tries to initiate an RPC but finds that it already has the same number of RPCs
1216 outstanding, it will wait to issue further RPCs until some complete. The minimum
1217 setting is 1 and maximum setting is 256. </para>
1218 <para>To improve small file I/O performance, increase the
1219 <literal>max_rpcs_in_flight</literal> value.</para>
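<para>A minimal sketch of raising both of the limits above on a client
(the values shown are examples only; suitable values depend on network
and server capacity):
<screen>client# lctl set_param osc.testfs-OST*.max_rpcs_in_flight=32
client# lctl set_param osc.testfs-OST*.max_pages_per_rpc=4M</screen></para>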
<para><literal>llite.<replaceable>fsname-instance</replaceable>.max_cached_mb</literal> -
1223 Maximum amount of inactive data cached by the client (default is 3/4 of RAM). For
1225 <screen># lctl get_param llite.testfs-ce63ca00.max_cached_mb
1231 <para>The value for <literal><replaceable>osc_instance</replaceable></literal> is typically
1232 <literal><replaceable>fsname</replaceable>-OST<replaceable>ost_index</replaceable>-osc-<replaceable>mountpoint_instance</replaceable></literal>,
1233 where the value for <literal><replaceable>mountpoint_instance</replaceable></literal> is
1234 unique to each mount point to allow associating osc, mdc, lov, lmv, and llite parameters
1235 with the same mount point. For
1236 example:<screen>lctl get_param osc.testfs-OST0000-osc-ffff88107412f400.rpc_stats
1237 osc.testfs-OST0000-osc-ffff88107412f400.rpc_stats=
1238 snapshot_time: 1375743284.337839 (secs.usecs)
1239 read RPCs in flight: 0
1240 write RPCs in flight: 0
1244 <section remap="h3">
1246 <primary>proc</primary>
1247 <secondary>readahead</secondary>
1248 </indexterm>Tuning File Readahead and Directory Statahead</title>
1249 <para>File readahead and directory statahead enable reading of data into memory before a
1250 process requests the data. File readahead reads file content data into memory and directory
1251 statahead reads metadata into memory. When readahead and statahead work well, a process that
accesses data finds that the information it needs is available in memory immediately when
requested, without the delay of network I/O.</para>
1254 <para condition="l22">In Lustre software release 2.2.0, the directory statahead feature was
1255 improved to enhance directory traversal performance. The improvements primarily addressed
1256 two issues: <orderedlist>
1258 <para>A race condition existed between the statahead thread and other VFS operations
1259 while processing asynchronous <literal>getattr</literal> RPC replies, causing
1260 duplicate entries in dcache. This issue was resolved by using statahead local dcache.
1264 <para>File size/block attributes pre-fetching was not supported, so the traversing
1265 thread had to send synchronous glimpse size RPCs to OST(s). This issue was resolved by
1266 using asynchronous glimpse lock (AGL) RPCs to pre-fetch file size/block attributes
1271 <section remap="h4">
1272 <title>Tuning File Readahead</title>
1273 <para>File readahead is triggered when two or more sequential reads by an application fail
1274 to be satisfied by data in the Linux buffer cache. The size of the initial readahead is 1
1275 MB. Additional readaheads grow linearly and increment until the readahead cache on the
1276 client is full at 40 MB.</para>
1277 <para>Readahead tunables include:</para>
1280 <para><literal>llite.<replaceable>fsname-instance</replaceable>.max_read_ahead_mb</literal>
1281 - Controls the maximum amount of data readahead on a file. Files are read ahead in
1282 RPC-sized chunks (1 MB or the size of the <literal>read()</literal> call, if larger)
1283 after the second sequential read on a file descriptor. Random reads are done at the
1284 size of the <literal>read()</literal> call only (no readahead). Reads to
1285 non-contiguous regions of the file reset the readahead algorithm, and readahead is not
triggered again until sequential reads resume.</para>
1287 <para>To disable readahead, set this tunable to 0. The default value is 40 MB.</para>
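<para>For example, to raise the per-file readahead limit to 64 MB on a
client (a sketch; the value shown is arbitrary):
<screen>client# lctl set_param llite.*.max_read_ahead_mb=64</screen></para>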
1290 <para><literal>llite.<replaceable>fsname-instance</replaceable>.max_read_ahead_whole_mb</literal>
1291 - Controls the maximum size of a file that is read in its entirety, regardless of the
1292 size of the <literal>read()</literal>.</para>
1297 <title>Tuning Directory Statahead and AGL</title>
<para>Many system commands, such as <literal>ls -l</literal>, <literal>du</literal>, and
1299 <literal>find</literal>, traverse a directory sequentially. To make these commands run
1300 efficiently, the directory statahead and asynchronous glimpse lock (AGL) can be enabled to
improve directory traversal performance.</para>
1302 <para>The statahead tunables are:</para>
1305 <para><literal>statahead_max</literal> - Controls whether directory statahead is enabled
1306 and the maximum statahead window size (i.e., how many files can be pre-fetched by the
1307 statahead thread). By default, statahead is enabled and the value of
1308 <literal>statahead_max</literal> is 32.</para>
1309 <para>To disable statahead, run:</para>
1310 <screen>lctl set_param llite.*.statahead_max=0</screen>
1311 <para>To set the maximum statahead window size (<replaceable>n</replaceable>),
1313 <screen>lctl set_param llite.*.statahead_max=<replaceable>n</replaceable></screen>
1314 <para>The maximum value of <replaceable>n</replaceable> is 8192.</para>
1315 <para>The AGL can be controlled by entering:</para>
1316 <screen>lctl set_param llite.*.statahead_agl=<replaceable>n</replaceable></screen>
1317 <para>The default value for <replaceable>n</replaceable> is 1, which enables the AGL. If
1318 <replaceable>n</replaceable> is 0, the AGL is disabled.</para>
1321 <para><literal>statahead_stats</literal> - A read-only interface that indicates the
1322 current statahead and AGL statistics, such as how many times statahead/AGL has been
1323 triggered since the last mount, how many statahead/AGL failures have occurred due to
1324 an incorrect prediction or other causes.</para>
1326 <para>The AGL is affected by statahead because the inodes processed by AGL are built
1327 by the statahead thread, which means the statahead thread is the input of the AGL
pipeline. Consequently, if statahead is disabled, the AGL is also disabled.</para>
1334 <section remap="h3">
1336 <primary>proc</primary>
1337 <secondary>read cache</secondary>
1338 </indexterm>Tuning OSS Read Cache</title>
1339 <para>The OSS read cache feature provides read-only caching of data on an OSS. This
1340 functionality uses the Linux page cache to store the data and uses as much physical memory
1341 as is allocated.</para>
1342 <para>OSS read cache improves Lustre file system performance in these situations:</para>
1345 <para>Many clients are accessing the same data set (as in HPC applications or when
1346 diskless clients boot from the Lustre file system).</para>
1349 <para>One client is storing data while another client is reading it (i.e., clients are
1350 exchanging data via the OST).</para>
1353 <para>A client has very limited caching of its own.</para>
1356 <para>OSS read cache offers these benefits:</para>
1359 <para>Allows OSTs to cache read data more frequently.</para>
1362 <para>Improves repeated reads to match network speeds instead of disk speeds.</para>
1365 <para>Provides the building blocks for OST write cache (small-write aggregation).</para>
1368 <section remap="h4">
1369 <title>Using OSS Read Cache</title>
1370 <para>OSS read cache is implemented on the OSS, and does not require any special support on
1371 the client side. Since OSS read cache uses the memory available in the Linux page cache,
1372 the appropriate amount of memory for the cache should be determined based on I/O patterns;
1373 if the data is mostly reads, then more cache is required than would be needed for mostly
1375 <para>OSS read cache is managed using the following tunables:</para>
1378 <para><literal>read_cache_enable</literal> - Controls whether data read from disk during
1379 a read request is kept in memory and available for later read requests for the same
1380 data, without having to re-read it from disk. By default, read cache is enabled
1381 (<literal>read_cache_enable=1</literal>).</para>
1382 <para>When the OSS receives a read request from a client, it reads data from disk into
1383 its memory and sends the data as a reply to the request. If read cache is enabled,
1384 this data stays in memory after the request from the client has been fulfilled. When
1385 subsequent read requests for the same data are received, the OSS skips reading data
1386 from disk and the request is fulfilled from the cached data. The read cache is managed
1387 by the Linux kernel globally across all OSTs on that OSS so that the least recently
1388 used cache pages are dropped from memory when the amount of free memory is running
1390 <para>If read cache is disabled (<literal>read_cache_enable=0</literal>), the OSS
1391 discards the data after a read request from the client is serviced and, for subsequent
1392 read requests, the OSS again reads the data from disk.</para>
1393 <para>To disable read cache on all the OSTs of an OSS, run:</para>
1394 <screen>root@oss1# lctl set_param obdfilter.*.read_cache_enable=0</screen>
1395 <para>To re-enable read cache on one OST, run:</para>
1396 <screen>root@oss1# lctl set_param obdfilter.{OST_name}.read_cache_enable=1</screen>
1397 <para>To check if read cache is enabled on all OSTs on an OSS, run:</para>
1398 <screen>root@oss1# lctl get_param obdfilter.*.read_cache_enable</screen>
1401 <para><literal>writethrough_cache_enable</literal> - Controls whether data sent to the
1402 OSS as a write request is kept in the read cache and available for later reads, or if
1403 it is discarded from cache when the write is completed. By default, the writethrough
1404 cache is enabled (<literal>writethrough_cache_enable=1</literal>).</para>
1405 <para>When the OSS receives write requests from a client, it receives data from the
1406 client into its memory and writes the data to disk. If the writethrough cache is
1407 enabled, this data stays in memory after the write request is completed, allowing the
1408 OSS to skip reading this data from disk if a later read request, or partial-page write
1409 request, for the same data is received.</para>
1410 <para>If the writethrough cache is disabled
(<literal>writethrough_cache_enable=0</literal>), the OSS discards the data after
1412 the write request from the client is completed. For subsequent read requests, or
1413 partial-page write requests, the OSS must re-read the data from disk.</para>
1414 <para>Enabling writethrough cache is advisable if clients are doing small or unaligned
1415 writes that would cause partial-page updates, or if the files written by one node are
1416 immediately being accessed by other nodes. Some examples where enabling writethrough
1417 cache might be useful include producer-consumer I/O models or shared-file writes with
1418 a different node doing I/O not aligned on 4096-byte boundaries. </para>
1419 <para>Disabling the writethrough cache is advisable when files are mostly written to the
1420 file system but are not re-read within a short time period, or files are only written
1421 and re-read by the same node, regardless of whether the I/O is aligned or not.</para>
1422 <para>To disable the writethrough cache on all OSTs of an OSS, run:</para>
1423 <screen>root@oss1# lctl set_param obdfilter.*.writethrough_cache_enable=0</screen>
1424 <para>To re-enable the writethrough cache on one OST, run:</para>
1425 <screen>root@oss1# lctl set_param obdfilter.{OST_name}.writethrough_cache_enable=1</screen>
1426 <para>To check if the writethrough cache is enabled, run:</para>
1427 <screen>root@oss1# lctl get_param obdfilter.*.writethrough_cache_enable</screen>
1430 <para><literal>readcache_max_filesize</literal> - Controls the maximum size of a file
1431 that both the read cache and writethrough cache will try to keep in memory. Files
1432 larger than <literal>readcache_max_filesize</literal> will not be kept in cache for
1433 either reads or writes.</para>
1434 <para>Setting this tunable can be useful for workloads where relatively small files are
1435 repeatedly accessed by many clients, such as job startup files, executables, log
1436 files, etc., but large files are read or written only once. By not putting the larger
1437 files into the cache, it is much more likely that more of the smaller files will
1438 remain in cache for a longer time.</para>
1439 <para>When setting <literal>readcache_max_filesize</literal>, the input value can be
1440 specified in bytes, or can have a suffix to indicate other binary units such as
1441 <literal>K</literal> (kilobytes), <literal>M</literal> (megabytes),
1442 <literal>G</literal> (gigabytes), <literal>T</literal> (terabytes), or
1443 <literal>P</literal> (petabytes).</para>
1444 <para>To limit the maximum cached file size to 32 MB on all OSTs of an OSS, run:</para>
1445 <screen>root@oss1# lctl set_param obdfilter.*.readcache_max_filesize=32M</screen>
1446 <para>To disable the maximum cached file size on an OST, run:</para>
1447 <screen>root@oss1# lctl set_param obdfilter.{OST_name}.readcache_max_filesize=-1</screen>
1448 <para>To check the current maximum cached file size on all OSTs of an OSS, run:</para>
1449 <screen>root@oss1# lctl get_param obdfilter.*.readcache_max_filesize</screen>
1456 <primary>proc</primary>
1457 <secondary>OSS journal</secondary>
1458 </indexterm>Enabling OSS Asynchronous Journal Commit</title>
1459 <para>The OSS asynchronous journal commit feature asynchronously writes data to disk without
1460 forcing a journal flush. This reduces the number of seeks and significantly improves
1461 performance on some hardware.</para>
1463 <para>Asynchronous journal commit cannot work with direct I/O-originated writes
1464 (<literal>O_DIRECT</literal> flag set). In this case, a journal flush is forced. </para>
1466 <para>When the asynchronous journal commit feature is enabled, client nodes keep data in the
1467 page cache (a page reference). Lustre clients monitor the last committed transaction number
1468 (<literal>transno</literal>) in messages sent from the OSS to the clients. When a client
1469 sees that the last committed <literal>transno</literal> reported by the OSS is at least
1470 equal to the bulk write <literal>transno</literal>, it releases the reference on the
1471 corresponding pages. To avoid page references being held for too long on clients after a
1472 bulk write, a 7 second ping request is scheduled (the default OSS file system commit time
1473 interval is 5 seconds) after the bulk write reply is received, so the OSS has an opportunity
1474 to report the last committed <literal>transno</literal>.</para>
1475 <para>If the OSS crashes before the journal commit occurs, then intermediate data is lost.
1476 However, OSS recovery functionality incorporated into the asynchronous journal commit
1477 feature causes clients to replay their write requests and compensate for the missing disk
1478 updates by restoring the state of the file system.</para>
1479 <para>By default, <literal>sync_journal</literal> is enabled
1480 (<literal>sync_journal=1</literal>), so that journal entries are committed synchronously.
1481 To enable asynchronous journal commit, set the <literal>sync_journal</literal> parameter to
1482 <literal>0</literal> by entering: </para>
1483 <screen>$ lctl set_param obdfilter.*.sync_journal=0
1484 obdfilter.lol-OST0001.sync_journal=0</screen>
1485 <para>An associated <literal>sync-on-lock-cancel</literal> feature (enabled by default)
1486 addresses a data consistency issue that can result if an OSS crashes after multiple clients
1487 have written data into intersecting regions of an object, and then one of the clients also
1488 crashes. A condition is created in which the POSIX requirement for continuous writes is
1489 violated along with a potential for corrupted data. With
1490 <literal>sync-on-lock-cancel</literal> enabled, if a cancelled lock has any volatile
1491 writes attached to it, the OSS synchronously writes the journal to disk on lock
1492 cancellation. Disabling the <literal>sync-on-lock-cancel</literal> feature may enhance
1493 performance for concurrent write workloads, but it is recommended that you not disable this
1495 <para> The <literal>sync_on_lock_cancel</literal> parameter can be set to the following
1499 <para><literal>always</literal> - Always force a journal flush on lock cancellation
1500 (default when <literal>async_journal</literal> is enabled).</para>
1503 <para><literal>blocking</literal> - Force a journal flush only when the local cancellation
1504 is due to a blocking callback.</para>
1507 <para><literal>never</literal> - Do not force any journal flush (default when
1508 <literal>async_journal</literal> is disabled).</para>
<para>For example, to set <literal>sync_on_lock_cancel</literal> not to force a journal
1512 flush, use a command similar to:</para>
<screen>$ lctl set_param obdfilter.*.sync_on_lock_cancel=never
1514 obdfilter.lol-OST0001.sync_on_lock_cancel=never</screen>
1516 <section xml:id="dbdoclet.TuningModRPCs" condition='l28'>
1519 <primary>proc</primary>
1520 <secondary>client metadata performance</secondary>
1522 Tuning the Client Metadata RPC Stream
1524 <para>The client metadata RPC stream represents the metadata RPCs issued
in parallel by a client to an MDT target. The metadata RPCs can be split
into two categories: the requests that do not modify the file system
(such as the getattr operation) and the requests that do modify the file
system (such as the create, unlink, and setattr operations). To help optimize the client
1529 metadata RPC stream, several tuning variables are provided to adjust
1530 behavior according to network conditions and cluster size.</para>
1531 <para>Note that increasing the number of metadata RPCs issued in parallel
might improve the performance of metadata-intensive parallel applications,
1533 but as a consequence it will consume more memory on the client and on
1536 <title>Configuring the Client Metadata RPC Stream</title>
1537 <para>The MDC <literal>max_rpcs_in_flight</literal> parameter defines
1538 the maximum number of metadata RPCs, both modifying and
non-modifying RPCs, that can be sent in parallel by a client to an MDT
target. This includes all file system metadata operations, such as
file or directory stat, creation, and unlink. The default setting is 8,
1542 minimum setting is 1 and maximum setting is 256.</para>
1543 <para>To set the <literal>max_rpcs_in_flight</literal> parameter, run
1544 the following command on the Lustre client:</para>
<screen>client$ lctl set_param mdc.*.max_rpcs_in_flight=16</screen>
1546 <para>The MDC <literal>max_mod_rpcs_in_flight</literal> parameter
1547 defines the maximum number of file system modifying RPCs that can be
sent in parallel by a client to an MDT target. For example, the Lustre
1549 client sends modify RPCs when it performs file or directory creation,
1550 unlink, access permission modification or ownership modification. The
1551 default setting is 7, minimum setting is 1 and maximum setting is
1553 <para>To set the <literal>max_mod_rpcs_in_flight</literal> parameter,
1554 run the following command on the Lustre client:</para>
<screen>client$ lctl set_param mdc.*.max_mod_rpcs_in_flight=12</screen>
1556 <para>The <literal>max_mod_rpcs_in_flight</literal> value must be
1557 strictly less than the <literal>max_rpcs_in_flight</literal> value.
It must also be less than or equal to the MDT
<literal>max_mod_rpcs_per_client</literal> value. If one of these
conditions is not met, the setting fails and an explicit message
1561 is written in the Lustre log.</para>
1562 <para>The MDT <literal>max_mod_rpcs_per_client</literal> parameter is a
1563 tunable of the kernel module <literal>mdt</literal> that defines the
1564 maximum number of file system modifying RPCs in flight allowed per
1565 client. The parameter can be updated at runtime, but the change is
effective for new client connections only. The default setting is 8.
1568 <para>To set the <literal>max_mod_rpcs_per_client</literal> parameter,
1569 run the following command on the MDS:</para>
1570 <screen>mds$ echo 12 > /sys/module/mdt/parameters/max_mod_rpcs_per_client</screen>
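<para>The value currently in effect can be checked by reading the same
module parameter:
<screen>mds$ cat /sys/module/mdt/parameters/max_mod_rpcs_per_client</screen></para>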
1573 <title>Monitoring the Client Metadata RPC Stream</title>
1574 <para>The <literal>rpc_stats</literal> file contains histogram data
1575 showing information about modify metadata RPCs. It can be helpful to
1576 identify the level of parallelism achieved by an application doing
1577 modify metadata operations.</para>
1578 <para><emphasis role="bold">Example:</emphasis></para>
1579 <screen>client$ lctl get_param mdc.*.rpc_stats
1580 snapshot_time: 1441876896.567070 (secs.usecs)
1581 modify_RPCs_in_flight: 0
1584 rpcs in flight rpcs % cum %
1597 12: 4540 18 100</screen>
1598 <para>The file information includes:</para>
1601 <para><literal>snapshot_time</literal> - UNIX epoch instant the
1602 file was read.</para>
1605 <para><literal>modify_RPCs_in_flight</literal> - Number of modify
1606 RPCs issued by the MDC, but not completed at the time of the
1607 snapshot. This value should always be less than or equal to
1608 <literal>max_mod_rpcs_in_flight</literal>.</para>
1611 <para><literal>rpcs in flight</literal> - Number of modify RPCs
1612 that are pending when an RPC is sent, the relative percentage
1613 (<literal>%</literal>) of total modify RPCs, and the cumulative
1614 percentage (<literal>cum %</literal>) to that point.</para>
1617 <para>If a large proportion of modify metadata RPCs are issued while the
1618 number of pending metadata RPCs is close to the
1619 <literal>max_mod_rpcs_in_flight</literal> value, increasing the
1620 <literal>max_mod_rpcs_in_flight</literal> value may
1621 improve the modify metadata performance.</para>
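<para>A typical workflow is to reset the counters, run the metadata-intensive
application, and then re-read the histogram. This example assumes that, as with
other Lustre statistics files, writing to the <literal>rpc_stats</literal> file
clears it:</para>
<screen>client$ lctl set_param mdc.*.rpc_stats=clear
client$ lctl get_param mdc.*.rpc_stats</screen>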
1626 <title>Configuring Timeouts in a Lustre File System</title>
1627 <para>In a Lustre file system, RPC timeouts are set using an adaptive timeouts mechanism, which
1628 is enabled by default. Servers track RPC completion times and then report back to clients
1629 estimates for completion times for future RPCs. Clients use these estimates to set RPC
1630 timeout values. If the processing of server requests slows down for any reason, the server
1631 estimates for RPC completion increase, and clients then revise RPC timeout values to allow
1632 more time for RPC completion.</para>
1633 <para>If the RPCs queued on the server approach the RPC timeout specified by the client, to
1634 avoid RPC timeouts and disconnect/reconnect cycles, the server sends an "early reply" to the
1635 client, telling the client to allow more time. Conversely, as server processing speeds up, RPC
1636 timeout values decrease, resulting in faster detection if the server becomes non-responsive
1637 and quicker connection to the failover partner of the server.</para>
1640 <primary>proc</primary>
1641 <secondary>configuring adaptive timeouts</secondary>
1642 </indexterm><indexterm>
1643 <primary>configuring</primary>
1644 <secondary>adaptive timeouts</secondary>
1645 </indexterm><indexterm>
1646 <primary>proc</primary>
1647 <secondary>adaptive timeouts</secondary>
1648 </indexterm>Configuring Adaptive Timeouts</title>
1649 <para>The adaptive timeout parameters in the table below can be set persistently system-wide
1650 using <literal>lctl conf_param</literal> on the MGS. For example, the following command sets
1651 the <literal>at_max</literal> value for all servers and clients associated with the file
1653 <literal>testfs</literal>:<screen>lctl conf_param testfs.sys.at_max=1500</screen></para>
1655 <para>Clients that access multiple Lustre file systems must use the same parameter values
1656 for all file systems.</para>
1658 <informaltable frame="all">
1660 <colspec colname="c1" colwidth="30*"/>
1661 <colspec colname="c2" colwidth="80*"/>
1665 <para><emphasis role="bold">Parameter</emphasis></para>
1668 <para><emphasis role="bold">Description</emphasis></para>
1676 <literal> at_min </literal></para>
1679 <para>Minimum adaptive timeout (in seconds). The default value is 0. The
1680 <literal>at_min</literal> parameter is the minimum processing time that a server
1681 will report. Ideally, <literal>at_min</literal> should be set to its default
1682 value. Clients base their timeouts on this value, but they do not use this value
1684 <para>If, for unknown reasons (usually due to temporary network outages), the
1685 adaptive timeout value is too short and clients time out their RPCs, you can
1686 increase the <literal>at_min</literal> value to compensate for this.</para>
1692 <literal> at_max </literal></para>
1695 <para>Maximum adaptive timeout (in seconds). The <literal>at_max</literal> parameter
1696 is an upper-limit on the service time estimate. If <literal>at_max</literal> is
1697 reached, an RPC request times out.</para>
1698 <para>Setting <literal>at_max</literal> to 0 causes adaptive timeouts to be disabled
1699 and a fixed timeout method to be used instead (see <xref
1700 xmlns:xlink="http://www.w3.org/1999/xlink" linkend="section_c24_nt5_dl"/>).</para>
1702 <para>If slow hardware causes the service estimate to increase beyond the default
1703 value of <literal>at_max</literal>, increase <literal>at_max</literal> to the
1704 maximum time you are willing to wait for an RPC completion.</para>
1711 <literal> at_history </literal></para>
1714 <para>Time period (in seconds) within which adaptive timeouts remember the slowest
1715 event that occurred. The default is 600.</para>
1721 <literal> at_early_margin </literal></para>
1724 <para>Amount of time before the Lustre server sends an early reply (in seconds).
1725 Default is 5.</para>
1731 <literal> at_extra </literal></para>
1734 <para>Incremental amount of time that a server requests with each early reply (in
1735 seconds). The server does not know how much time the RPC will take, so it asks for
1736 a fixed value. The default is 30, which provides a balance between sending too
1737 many early replies for the same RPC and overestimating the actual completion
1739 <para>When a server finds a queued request about to time out and needs to send an
1740 early reply, the server adds the <literal>at_extra</literal> value. If the
1741 time expires, the Lustre server drops the request, and the client enters recovery
1742 status and reconnects to restore the connection to normal status.</para>
1743 <para>If you see multiple early replies for the same RPC asking for 30-second
1744 increases, change the <literal>at_extra</literal> value to a larger number to cut
1745 down on early replies sent and, therefore, network load.</para>
1751 <literal> ldlm_enqueue_min </literal></para>
1754 <para>Minimum lock enqueue time (in seconds). The default is 100. The lock enqueue
1755 time, <literal>ldlm_enqueue</literal>, is the larger of the measured
1756 enqueue estimate (influenced by the <literal>at_min</literal> and
1757 <literal>at_max</literal> parameters) multiplied by a weighting factor, and the
1758 value of <literal>ldlm_enqueue_min</literal>.</para>
1759 <para>Lustre Distributed Lock Manager (LDLM) lock enqueues have a dedicated minimum
1760 value for <literal>ldlm_enqueue_min</literal>. Lock enqueue timeouts increase as
1761 the measured enqueue times increase (similar to adaptive timeouts).</para>
1768 <title>Interpreting Adaptive Timeout Information</title>
1769 <para>Adaptive timeout information can be obtained from the <literal>timeouts</literal>
1770 files in <literal>/proc/fs/lustre/*/</literal> on each server and client using the
1771 <literal>lctl</literal> command. To read information from a <literal>timeouts</literal>
1772 file, enter a command similar to:</para>
1773 <screen># lctl get_param -n ost.*.ost_io.timeouts
1774 service : cur 33 worst 34 (at 1193427052, 0d0h26m40s ago) 1 1 33 2</screen>
1775 <para>In this example, the <literal>ost_io</literal> service on this node is currently
1776 reporting an estimated RPC service time of 33 seconds. The worst RPC service time was 34
1777 seconds, which occurred 26 minutes ago.</para>
1778 <para>The output also provides a history of service times. Four "bins" of adaptive
1779 timeout history are shown, with the maximum RPC time in each bin reported. In both the
1780 0-150s bin and the 150-300s bin, the maximum RPC time was 1. The 300-450s bin shows the
1781 worst (maximum) RPC time at 33 seconds, and the 450-600s bin shows a maximum RPC time
1782 of 2 seconds. The estimated service time is the maximum value across the four bins (33
1783 seconds in this example).</para>
1784 <para>Service times (as reported by the servers) are also tracked in the client OBDs, as
1785 shown in this example:</para>
1786 <screen># lctl get_param osc.*.timeouts
1787 last reply : 1193428639, 0d0h00m00s ago
1788 network : cur 1 worst 2 (at 1193427053, 0d0h26m26s ago) 1 1 1 1
1789 portal 6 : cur 33 worst 34 (at 1193427052, 0d0h26m27s ago) 33 33 33 2
1790 portal 28 : cur 1 worst 1 (at 1193426141, 0d0h41m38s ago) 1 1 1 1
1791 portal 7 : cur 1 worst 1 (at 1193426141, 0d0h41m38s ago) 1 0 1 1
1792 portal 17 : cur 1 worst 1 (at 1193426177, 0d0h41m02s ago) 1 0 0 1
1794 <para>In this example, portal 6, the <literal>ost_io</literal> service portal, shows the
1795 history of service estimates reported by the portal.</para>
1796 <para>Server statistic files also show the range of estimates including min, max, sum, and
1797 sumsq. For example:</para>
1798 <screen># lctl get_param mdt.*.mdt.stats
1800 req_timeout 6 samples [sec] 1 10 15 105
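<para>As an illustration of how these values can be combined (this derivation is
not produced by Lustre itself): the six <literal>req_timeout</literal> samples above
have a minimum of 1 second, a maximum of 10 seconds, a sum of 15, and a sum of
squares of 105, so the mean request timeout is 15/6 = 2.5 seconds and the sample
standard deviation is approximately sqrt((105 - 15*15/6)/5), or about 3.7
seconds.</para>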
1805 <section xml:id="section_c24_nt5_dl">
1806 <title>Setting Static Timeouts<indexterm>
1807 <primary>proc</primary>
1808 <secondary>static timeouts</secondary>
1809 </indexterm></title>
1810 <para>The Lustre software provides two sets of static (fixed) timeouts, LND timeouts and
1811 Lustre timeouts, which are used when adaptive timeouts are not enabled.</para>
1815 <para><emphasis role="italic"><emphasis role="bold">LND timeouts</emphasis></emphasis> -
1816 LND timeouts ensure that point-to-point communications across a network complete in a
1817 finite time in the presence of failures, such as lost packets or broken connections.
1818 LND timeout parameters are set for each individual LND.</para>
1819 <para>LND timeouts are logged with the <literal>S_LND</literal> flag set. They are not
1820 printed as console messages, so check the Lustre log for <literal>D_NETERROR</literal>
1821 messages or enable printing of <literal>D_NETERROR</literal> messages to the console
1822 using:<screen>lctl set_param printk=+neterror</screen></para>
1823 <para>Congested routers can be a source of spurious LND timeouts. To avoid this
1824 situation, increase the number of LNET router buffers to reduce back-pressure and/or
1825 increase LND timeouts on all nodes on all connected networks. Also consider increasing
1826 the total number of LNET router nodes in the system so that the aggregate router
1827 bandwidth matches the aggregate server bandwidth.</para>
1830 <para><emphasis role="italic"><emphasis role="bold">Lustre timeouts
1831 </emphasis></emphasis>- Lustre timeouts ensure that Lustre RPCs complete in a finite
1832 time in the presence of failures when adaptive timeouts are not enabled. Adaptive
1833 timeouts are enabled by default. To disable adaptive timeouts at run time, set
1834 <literal>at_max</literal> to 0 by running on the
1835 MGS:<screen># lctl conf_param <replaceable>fsname</replaceable>.sys.at_max=0</screen></para>
1837 <para>Changing the status of adaptive timeouts at runtime may cause a transient client
1838 timeout, recovery, and reconnection.</para>
1840 <para>Lustre timeouts are always printed as console messages. </para>
1841 <para>If Lustre timeouts are not accompanied by LND timeouts, increase the Lustre
1842 timeout on both servers and clients. Lustre timeouts are set using a command such as
1843 the following:<screen># lctl set_param timeout=30</screen></para>
1844 <para>Lustre timeout parameters are described in the table below.</para>
1847 <informaltable frame="all">
1849 <colspec colname="c1" colnum="1" colwidth="30*"/>
1850 <colspec colname="c2" colnum="2" colwidth="70*"/>
1853 <entry>Parameter</entry>
1854 <entry>Description</entry>
1859 <entry><literal>timeout</literal></entry>
1861 <para>The time that a client waits for a server to complete an RPC (default 100s).
1862 Servers wait half this time for a normal client RPC to complete and a quarter of
1863 this time for a single bulk request (read or write of up to 4 MB) to complete.
1864 The client pings recoverable targets (MDS and OSTs) at one quarter of the
1865 timeout, and the server waits one and a half times the timeout before evicting a
1866 client for being "stale."</para>
1867 <para>A Lustre client sends periodic 'ping' messages to servers with which
1868 it has had no communication for the specified period of time. Any network
1869 activity between a client and a server in the file system also serves as a
1874 <entry><literal>ldlm_timeout</literal></entry>
1876 <para>The time that a server waits for a client to reply to an initial AST (lock
1877 cancellation request). The default is 20s for an OST and 6s for an MDS. If the
1878 client replies to the AST, the server will give it a normal timeout (half the
1879 client timeout) to flush any dirty data and release the lock.</para>
1883 <entry><literal>fail_loc</literal></entry>
1885 <para>An internal debugging failure hook. The default value of
1886 <literal>0</literal> means that no failure will be triggered or
1891 <entry><literal>dump_on_timeout</literal></entry>
1893 <para>Triggers a dump of the Lustre debug log when a timeout occurs. The default
1894 value of <literal>0</literal> (zero) means a dump of the Lustre debug log will
1895 not be triggered.</para>
1899 <entry><literal>dump_on_eviction</literal></entry>
1901 <para>Triggers a dump of the Lustre debug log when an eviction occurs. The default
1902 value of <literal>0</literal> (zero) means a dump of the Lustre debug log will
1903 not be triggered. </para>
1912 <section remap="h3">
1914 <primary>proc</primary>
1915 <secondary>LNET</secondary>
1916 </indexterm><indexterm>
1917 <primary>LNET</primary>
1918 <secondary>proc</secondary>
1919 </indexterm>Monitoring LNET</title>
1920 <para>LNET information is located in <literal>/proc/sys/lnet</literal> in these files:<itemizedlist>
1922 <para><literal>peers</literal> - Shows all NIDs known to this node and provides
1923 information on the queue state.</para>
1924 <para>Example:</para>
1925 <screen># lctl get_param peers
1926 nid refs state max rtr min tx min queue
1927 0@lo 1 ~rtr 0 0 0 0 0 0
1928 192.168.10.35@tcp 1 ~rtr 8 8 8 8 6 0
1929 192.168.10.36@tcp 1 ~rtr 8 8 8 8 6 0
1930 192.168.10.37@tcp 1 ~rtr 8 8 8 8 6 0</screen>
1931 <para>The fields are explained in the table below:</para>
1932 <informaltable frame="all">
1934 <colspec colname="c1" colwidth="30*"/>
1935 <colspec colname="c2" colwidth="80*"/>
1939 <para><emphasis role="bold">Field</emphasis></para>
1942 <para><emphasis role="bold">Description</emphasis></para>
1950 <literal>refs</literal>
1954 <para>A reference count. </para>
1960 <literal>state</literal>
1964 <para>If the node is a router, indicates the state of the router. Possible
1968 <para><literal>NA</literal> - Indicates the node is not a router.</para>
1971 <para><literal>up/down</literal>- Indicates if the node (router) is up or
1980 <literal>max </literal></para>
1983 <para>Maximum number of concurrent sends from this peer.</para>
1989 <literal>rtr </literal></para>
1992 <para>Number of routing buffer credits.</para>
1998 <literal>min </literal></para>
2001 <para>Minimum number of routing buffer credits seen.</para>
2007 <literal>tx </literal></para>
2010 <para>Number of send credits.</para>
2016 <literal>min </literal></para>
2019 <para>Minimum number of send credits seen.</para>
2025 <literal>queue </literal></para>
2028 <para>Total bytes in active/queued sends.</para>
2034 <para>Credits are initialized to allow a certain number of operations (in the example
2035 above, eight, as shown in the <literal>max</literal> column). LNET keeps track
2036 of the minimum number of credits ever seen over time, showing the peak congestion that
2037 has occurred during the monitored period. Fewer available credits indicate a more
2038 congested resource.</para>
2039 <para>The number of credits currently in flight (number of transmit credits) is shown in
2040 the <literal>tx</literal> column. The maximum number of send credits available is shown
2041 in the <literal>max</literal> column and never changes. The number of router buffers
2042 available for consumption by a peer is shown in the <literal>rtr</literal>
2044 <para>Therefore, <literal>rtr</literal> – <literal>tx</literal> is the number of transmits
2045 in flight. Typically, <literal>rtr == max</literal>, although a configuration can be set
2046 such that <literal>max >= rtr</literal>. The ratio of routing buffer credits to send
2047 credits (<literal>rtr/tx</literal>) that is less than <literal>max</literal> indicates
2048 operations are in progress. If the ratio <literal>rtr/tx</literal> is greater than
2049 <literal>max</literal>, operations are blocking.</para>
2050 <para>LNET also limits concurrent sends and number of router buffers allocated to a single
2051 peer so that no peer can occupy all these resources.</para>
2054 <para><literal>nis</literal> - Shows the current queue health on this node.</para>
2055 <para>Example:</para>
2056 <screen># lctl get_param nis
2057 nid refs peer max tx min
2059 192.168.10.34@tcp 4 8 256 256 252
2061 <para> The fields are explained in the table below.</para>
2062 <informaltable frame="all">
2064 <colspec colname="c1" colwidth="30*"/>
2065 <colspec colname="c2" colwidth="80*"/>
2069 <para><emphasis role="bold">Field</emphasis></para>
2072 <para><emphasis role="bold">Description</emphasis></para>
2080 <literal> nid </literal></para>
2083 <para>Network interface.</para>
2089 <literal> refs </literal></para>
2092 <para>Internal reference counter.</para>
2098 <literal> peer </literal></para>
2101 <para>Number of peer-to-peer send credits on this NID. Credits are used to size
2102 buffer pools.</para>
2108 <literal> max </literal></para>
2111 <para>Total number of send credits on this NID.</para>
2117 <literal> tx </literal></para>
2120 <para>Current number of send credits available on this NID.</para>
2126 <literal> min </literal></para>
2129 <para>Lowest number of send credits available on this NID.</para>
2135 <literal> queue </literal></para>
2138 <para>Total bytes in active/queued sends.</para>
2144 <para><emphasis role="bold"><emphasis role="italic">Analysis:</emphasis></emphasis></para>
2145 <para>Subtracting <literal>tx</literal> from <literal>max</literal>
2146 (<literal>max</literal> - <literal>tx</literal>) yields the number of sends currently
2147 active. A large or increasing number of active sends may indicate a problem (see the worked example after this list).</para>
2149 </itemizedlist></para>
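<para>For illustration, applying this analysis to the example outputs above: for peer
<literal>192.168.10.35@tcp</literal>, <literal>max</literal> - <literal>tx</literal> =
8 - 8 = 0 sends are currently in flight, and the minimum send credits value of 6 shows
that at most 8 - 6 = 2 sends were ever outstanding at once; for the local NID
<literal>192.168.10.34@tcp</literal> in the <literal>nis</literal> output,
256 - 256 = 0 sends are currently active, and the low-water mark of 252 indicates a
peak of 256 - 252 = 4 concurrent sends during the monitored period.</para>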
2151 <section remap="h3">
2153 <primary>proc</primary>
2154 <secondary>free space</secondary>
2155 </indexterm>Allocating Free Space on OSTs</title>
2156 <para>Free space is allocated using either a round-robin or a weighted algorithm. The allocation
2157 method is determined by the maximum amount of free-space imbalance between the OSTs. When free
2158 space is relatively balanced across OSTs, the faster round-robin allocator is used, which
2159 maximizes network balancing. The weighted allocator is used when any two OSTs are out of
2160 balance by more than a specified threshold.</para>
2161 <para>Free space distribution can be tuned using these two <literal>/proc</literal>
2165 <para><literal>qos_threshold_rr</literal> - The threshold at which the allocation method
2166 switches from round-robin to weighted is set in this file. The default is to switch to the
2167 weighted algorithm when any two OSTs are out of balance by more than 17 percent.</para>
2170 <para><literal>qos_prio_free</literal> - The weighting priority used by the weighted
2171 allocator can be adjusted in this file. Increasing the value of
2172 <literal>qos_prio_free</literal> puts more weighting on the amount of free space
2173 available on each OST and less on how stripes are distributed across OSTs. The default
2174 value is 91 percent. When the free space priority is set to 100, weighting is based
2175 entirely on free space and location is no longer used by the striping algorithm. (An illustrative command for adjusting these tunables appears at the end of this section.)</para>
2178 <para condition="l29"><literal>reserved_mb_low</literal> - The low watermark used to stop
2179 object allocation if available space is less than this value. The default is 0.1 percent of total
2183 <para condition="l29"><literal>reserved_mb_high</literal> - The high watermark used to start
2184 object allocation if available space is more than this value. The default is 0.2 percent of total
2188 <para>For more information about monitoring and managing free space, see <xref
2189 xmlns:xlink="http://www.w3.org/1999/xlink" linkend="dbdoclet.50438209_10424"/>.</para>
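<para>As an illustrative sketch only, these tunables can be adjusted on the MDS with
<literal>lctl set_param</literal>. The device matched by the wildcard and the values
shown are assumptions for this example, and the exact location of these files varies
between Lustre versions:</para>
<screen>mds$ lctl set_param lov.*.qos_threshold_rr=25
mds$ lctl set_param lov.*.qos_prio_free=95</screen>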
2191 <section remap="h3">
2193 <primary>proc</primary>
2194 <secondary>locking</secondary>
2195 </indexterm>Configuring Locking</title>
2196 <para>The <literal>lru_size</literal> parameter is used to control the number of client-side
2197 locks in the LRU queue of cached locks. The LRU size is dynamic, based on load, to optimize the number
2198 of locks available to nodes that have different workloads (e.g., login/build nodes vs. compute
2199 nodes vs. backup nodes).</para>
2200 <para>The total number of locks available is a function of the server RAM. The default limit is
2201 50 locks/1 MB of RAM. If memory pressure is too high, the LRU size is shrunk. The number of
2202 locks on the server is limited to <emphasis role="italic">the number of OSTs per
2203 server</emphasis> * <emphasis role="italic">the number of clients</emphasis> * <emphasis
2204 role="italic">the value of the</emphasis>
2205 <literal>lru_size</literal>
2206 <emphasis role="italic">setting on the client</emphasis> as follows: </para>
2209 <para>To enable automatic LRU sizing, set the <literal>lru_size</literal> parameter to 0. In
2210 this case, the <literal>lru_size</literal> parameter shows the current number of locks
2211 being used on the export. LRU sizing is enabled by default.</para>
2214 <para>To specify a maximum number of locks, set the <literal>lru_size</literal> parameter to
2215 a value other than zero but, normally, less than 100 * <emphasis role="italic">number of
2216 CPUs in client</emphasis>. It is recommended that you only increase the LRU size on a
2217 few login nodes where users access the file system interactively.</para>
2220 <para>To clear the LRU on a single client and, as a result, flush the client cache without changing
2221 the <literal>lru_size</literal> value, run:</para>
2222 <screen>$ lctl set_param ldlm.namespaces.<replaceable>osc_name|mdc_name</replaceable>.lru_size=clear</screen>
2223 <para>If the LRU size is set to be less than the number of existing unused locks, the unused
2224 locks are canceled immediately. Use <literal>echo clear</literal> to cancel all locks without
2225 changing the value.</para>
2227 <para>The <literal>lru_size</literal> parameter can only be set temporarily using
2228 <literal>lctl set_param</literal>; it cannot be set permanently.</para>
2230 <para>To disable LRU sizing, on the Lustre clients, run:</para>
2231 <screen>$ lctl set_param ldlm.namespaces.*osc*.lru_size=$((<replaceable>NR_CPU</replaceable>*100))</screen>
2232 <para>Replace <literal><replaceable>NR_CPU</replaceable></literal> with the number of CPUs on
2234 <para>To determine the number of locks being granted, run:</para>
2235 <screen>$ lctl get_param ldlm.namespaces.*.pool.limit</screen>
2237 <section xml:id="dbdoclet.50438271_87260">
2239 <primary>proc</primary>
2240 <secondary>thread counts</secondary>
2241 </indexterm>Setting MDS and OSS Thread Counts</title>
2242 <para>MDS and OSS thread count tunables can be used to set the minimum and maximum thread counts
2243 or to get the current number of running threads for the services listed in the table
2245 <informaltable frame="all">
2247 <colspec colname="c1" colwidth="50*"/>
2248 <colspec colname="c2" colwidth="50*"/>
2253 <emphasis role="bold">Service</emphasis></para>
2257 <emphasis role="bold">Description</emphasis></para>
2262 <literal> mds.MDS.mdt </literal>
2265 <para>Main metadata operations service</para>
2270 <literal> mds.MDS.mdt_readpage </literal>
2273 <para>Metadata <literal>readdir</literal> service</para>
2278 <literal> mds.MDS.mdt_setattr </literal>
2281 <para>Metadata <literal>setattr/close</literal> operations service </para>
2286 <literal> ost.OSS.ost </literal>
2289 <para>Main data operations service</para>
2294 <literal> ost.OSS.ost_io </literal>
2297 <para>Bulk data I/O services</para>
2302 <literal> ost.OSS.ost_create </literal>
2305 <para>OST object pre-creation service</para>
2310 <literal> ldlm.services.ldlm_canceld </literal>
2313 <para>DLM lock cancel service</para>
2318 <literal> ldlm.services.ldlm_cbd </literal>
2321 <para>DLM lock grant service</para>
2327 <para>For each service, an entry as shown below is
2328 created:<screen>/proc/fs/lustre/<replaceable>service</replaceable>/*/threads_<replaceable>min|max|started</replaceable></screen></para>
2331 <para>To temporarily set this tunable, run:</para>
2332 <screen># lctl <replaceable>get|set</replaceable>_param <replaceable>service</replaceable>.threads_<replaceable>min|max|started</replaceable> </screen>
2335 <para>To permanently set this tunable, run:</para>
2336 <screen># lctl conf_param <replaceable>obdname|fsname.obdtype</replaceable>.threads_<replaceable>min|max|started</replaceable> </screen>
2337 <para condition='l25'>For version 2.5 or later, run:
2338 <screen># lctl set_param -P <replaceable>service</replaceable>.threads_<replaceable>min|max|started</replaceable></screen></para>
2341 <para>The following examples show how to set thread counts and get the number of running threads
2342 for the service <literal>ost_io</literal> using the tunable
2343 <literal><replaceable>service</replaceable>.threads_<replaceable>min|max|started</replaceable></literal>.</para>
2346 <para>To get the number of running threads, run:</para>
2347 <screen># lctl get_param ost.OSS.ost_io.threads_started
2348 ost.OSS.ost_io.threads_started=128</screen>
2351 <para>To view the maximum number of threads (512 in this example), run:</para>
2352 <screen># lctl get_param ost.OSS.ost_io.threads_max
2353 ost.OSS.ost_io.threads_max=512</screen>
2356 <para>To set the maximum thread count to 256 instead of 512 (for example, to avoid overloading the
2357 storage array with requests), run:</para>
2358 <screen># lctl set_param ost.OSS.ost_io.threads_max=256
2359 ost.OSS.ost_io.threads_max=256</screen>
2362 <para>To set the maximum thread count to 256 instead of 512 permanently, run:</para>
2363 <screen># lctl conf_param testfs.ost.ost_io.threads_max=256</screen>
2364 <para condition='l25'>For version 2.5 or later, run:
2365 <screen># lctl set_param -P ost.OSS.ost_io.threads_max=256
2366 ost.OSS.ost_io.threads_max=256 </screen> </para>
2369 <para> To check if the <literal>threads_max</literal> setting is active, run:</para>
2370 <screen># lctl get_param ost.OSS.ost_io.threads_max
2371 ost.OSS.ost_io.threads_max=256</screen>
2375 <para>If the number of service threads is changed while the file system is running, the change
2376 may not take effect until the file system is stopped and restarted. If the number of service
2377 threads in use exceeds the new <literal>threads_max</literal> value setting, service threads
2378 that are already running will not be stopped.</para>
2380 <para>See also <xref xmlns:xlink="http://www.w3.org/1999/xlink" linkend="lustretuning"/></para>
2382 <section xml:id="dbdoclet.50438271_83523">
2384 <primary>proc</primary>
2385 <secondary>debug</secondary>
2386 </indexterm>Enabling and Interpreting Debugging Logs</title>
2387 <para>By default, a detailed log of all operations is generated to aid in debugging. Flags that
2388 control debugging are found in <literal>/proc/sys/lnet/debug</literal>. </para>
2389 <para>The overhead of debugging can affect the performance of a Lustre file system. Therefore, to
2390 minimize the impact on performance, the debug level can be lowered, which affects the amount
2391 of debugging information kept in the internal log buffer but does not alter the amount of
2392 information that goes to syslog. You can raise the debug level when you need to collect logs
2393 to debug problems. </para>
2394 <para>The debugging mask can be set using "symbolic names". The symbolic format is
2395 shown in the examples below.<itemizedlist>
2397 <para>To verify the debug level used, examine the <literal>sysctl</literal> that controls
2398 debugging by running:</para>
2399 <screen># sysctl lnet.debug
2400 lnet.debug = ioctl neterror warning error emerg ha config console</screen>
2403 <para>To turn off debugging (except for network error debugging), run the following
2404 command on all nodes concerned:</para>
2405 <screen># sysctl -w lnet.debug="neterror"
2406 lnet.debug = neterror</screen>
2408 </itemizedlist><itemizedlist>
2410 <para>To turn off debugging completely, run the following command on all nodes
2412 <screen># sysctl -w lnet.debug=0
2413 lnet.debug = 0</screen>
2416 <para>To set an appropriate debug level for a production environment, run:</para>
2417 <screen># sysctl -w lnet.debug="warning dlmtrace error emerg ha rpctrace vfstrace"
2418 lnet.debug = warning dlmtrace error emerg ha rpctrace vfstrace</screen>
2419 <para>The flags shown in this example collect enough high-level information to aid
2420 debugging, but they do not cause any serious performance impact.</para>
2422 </itemizedlist><itemizedlist>
2424 <para>To clear all flags and set new flags, run:</para>
2425 <screen># sysctl -w lnet.debug="warning"
2426 lnet.debug = warning</screen>
2428 </itemizedlist><itemizedlist>
2430 <para>To add new flags to flags that have already been set, precede each one with a
2431 "<literal>+</literal>":</para>
2432 <screen># sysctl -w lnet.debug="+neterror +ha"
2433 lnet.debug = +neterror +ha
2435 lnet.debug = neterror warning ha</screen>
2438 <para>To remove individual flags, precede them with a
2439 "<literal>-</literal>":</para>
2440 <screen># sysctl -w lnet.debug="-ha"
2443 lnet.debug = neterror warning</screen>
2446 <para>To verify or change the debug level, run commands such as the following:</para>
2447 <screen># lctl get_param debug
2450 # lctl set_param debug=+ha
2451 # lctl get_param debug
2454 # lctl set_param debug=-warning
2455 # lctl get_param debug
2457 neterror ha</screen>
2459 </itemizedlist></para>
2460 <para>Debugging parameters include:</para>
2463 <para><literal>subsystem_debug</literal> - Controls the debug logs for subsystems.</para>
2466 <para><literal>debug_path</literal> - Indicates the location where the debug log is dumped
2467 when triggered automatically or manually. The default path is
2468 <literal>/tmp/lustre-log</literal>.</para>
2471 <para>These parameters are also set using:<screen>sysctl -w lnet.debug={value}</screen></para>
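<para>For example, to direct automatically triggered debug dumps to a node-specific
location, assuming <literal>debug_path</literal> is exposed under the
<literal>lnet</literal> sysctl tree like the debug mask (the path shown is only an
illustration):</para>
<screen># sysctl -w lnet.debug_path=/tmp/lustre-log-node1
lnet.debug_path = /tmp/lustre-log-node1</screen>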
2472 <para>Additional useful parameters: <itemizedlist>
2474 <para><literal>panic_on_lbug</literal> - Causes <literal>panic()</literal> to be called
2475 when the Lustre software detects an internal problem (an <literal>LBUG</literal> log
2476 entry); panic crashes the node. This is particularly useful when a kernel crash dump
2477 utility is configured. The crash dump is triggered when the internal inconsistency is
2478 detected by the Lustre software. </para>
2481 <para><literal>upcall</literal> - Allows you to specify the path to the binary that will
2482 be invoked when an <literal>LBUG</literal> log entry is encountered. This binary is
2483 called with four parameters (a sketch of such a script is shown after this list):</para>
2484 <para> - The string <literal>LBUG</literal>.</para>
2485 <para> - The file where the <literal>LBUG</literal> occurred.</para>
2486 <para> - The function name.</para>
2487 <para> - The line number in the file</para>
2489 </itemizedlist></para>
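<para>A minimal sketch of such an upcall script is shown below. The logging action is
an illustrative assumption; the script simply records where the
<literal>LBUG</literal> occurred. The script must be executable, and its absolute
path is written to the <literal>upcall</literal> parameter described above.</para>
<screen>#!/bin/sh
# Illustrative upcall script. Lustre passes the string LBUG, the source file,
# the function name, and the line number as arguments 1 to 4.
logger -t lustre-upcall "$1 at $2:$4 in $3()"</screen>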
2491 <title>Interpreting OST Statistics</title>
2493 <para>See also <xref linkend="dbdoclet.50438219_84890"/> (<literal>llobdstat</literal>) and
2494 <xref linkend="dbdoclet.50438273_80593"/> (<literal>collectl</literal>).</para>
2496 <para>OST <literal>stats</literal> files can be used to provide statistics showing activity
2497 for each OST. For example:</para>
2498 <screen># lctl get_param osc.testfs-OST0000-osc.stats
2499 snapshot_time 1189732762.835363
2504 obd_ping 212</screen>
2505 <para>Use the <literal>llstat</literal> utility to monitor statistics over time.</para>
2506 <para>To clear the statistics, use the <literal>-c</literal> option to
2507 <literal>llstat</literal>. To specify how frequently the statistics should be reported (in
2508 seconds), use the <literal>-i</literal> option. In the example below, the
2509 <literal>-c</literal> option clears the statistics and <literal>-i10</literal> option
2510 reports statistics every 10 seconds:</para>
2511 <screen role="smaller">$ llstat -c -i10 /proc/fs/lustre/ost/OSS/ost_io/stats
2513 /usr/bin/llstat: STATS on 06/06/07
2514 /proc/fs/lustre/ost/OSS/ost_io/ stats on 192.168.16.35@tcp
2515 snapshot_time 1181074093.276072
2517 /proc/fs/lustre/ost/OSS/ost_io/stats @ 1181074103.284895
2519 Count Rate Events Unit last min avg max stddev
2520 req_waittime 8 0 8 [usec] 2078 34 259.75 868 317.49
2521 req_qdepth 8 0 8 [reqs] 1 0 0.12 1 0.35
2522 req_active 8 0 8 [reqs] 11 1 1.38 2 0.52
2523 reqbuf_avail 8 0 8 [bufs] 511 63 63.88 64 0.35
2524 ost_write 8 0 8 [bytes] 169767 72914 212209.62 387579 91874.29
2526 /proc/fs/lustre/ost/OSS/ost_io/stats @ 1181074113.290180
2528 Count Rate Events Unit last min avg max stddev
2529 req_waittime 31 3 39 [usec] 30011 34 822.79 12245 2047.71
2530 req_qdepth 31 3 39 [reqs] 0 0 0.03 1 0.16
2531 req_active 31 3 39 [reqs] 58 1 1.77 3 0.74
2532 reqbuf_avail 31 3 39 [bufs] 1977 63 63.79 64 0.41
2533 ost_write 30 3 38 [bytes] 1028467 15019 315325.16 910694 197776.51
2535 /proc/fs/lustre/ost/OSS/ost_io/stats @ 1181074123.325560
2537 Count Rate Events Unit last min avg max stddev
2538 req_waittime 21 2 60 [usec] 14970 34 784.32 12245 1878.66
2539 req_qdepth 21 2 60 [reqs] 0 0 0.02 1 0.13
2540 req_active 21 2 60 [reqs] 33 1 1.70 3 0.70
2541 reqbuf_avail 21 2 60 [bufs] 1341 63 63.82 64 0.39
2542 ost_write 21 2 59 [bytes] 7648424 15019 332725.08 910694 180397.87
2544 <para>The columns in this example are described in the table below.</para>
2545 <informaltable frame="all">
2547 <colspec colname="c1" colwidth="50*"/>
2548 <colspec colname="c2" colwidth="50*"/>
2552 <para><emphasis role="bold">Parameter</emphasis></para>
2555 <para><emphasis role="bold">Description</emphasis></para>
2561 <entry><literal>Name</literal></entry>
2562 <entry>Name of the service event. See the tables below for descriptions of service
2563 events that are tracked.</entry>
2568 <literal>Cur. Count </literal></para>
2571 <para>Number of events of each type sent in the last interval.</para>
2577 <literal>Cur. Rate </literal></para>
2580 <para>Number of events per second in the last interval.</para>
2586 <literal> # Events </literal></para>
2589 <para>Total number of such events since the events have been cleared.</para>
2595 <literal> Unit </literal></para>
2598 <para>Unit of measurement for that statistic (microseconds, requests,
2605 <literal> last </literal></para>
2608 <para>Average rate of these events (in units/event) for the last interval during
2609 which they arrived. For instance, in the above-mentioned case of
2610 <literal>ost_destroy</literal> it took an average of 736 microseconds per
2611 destroy for the 400 object destroys in the previous 10 seconds.</para>
2617 <literal> min </literal></para>
2620 <para>Minimum rate (in units/events) since the service started.</para>
2626 <literal> avg </literal></para>
2629 <para>Average rate.</para>
2635 <literal> max </literal></para>
2638 <para>Maximum rate.</para>
2644 <literal> stddev </literal></para>
2647 <para>Standard deviation (not measured in some cases)</para>
2653 <para>Events common to all services are shown in the table below.</para>
2654 <informaltable frame="all">
2656 <colspec colname="c1" colwidth="50*"/>
2657 <colspec colname="c2" colwidth="50*"/>
2661 <para><emphasis role="bold">Parameter</emphasis></para>
2664 <para><emphasis role="bold">Description</emphasis></para>
2672 <literal> req_waittime </literal></para>
2675 <para>Amount of time a request waited in the queue before being handled by an
2676 available server thread.</para>
2682 <literal> req_qdepth </literal></para>
2685 <para>Number of requests waiting to be handled in the queue for this service.</para>
2691 <literal> req_active </literal></para>
2694 <para>Number of requests currently being handled.</para>
2700 <literal> reqbuf_avail </literal></para>
2703 <para>Number of unsolicited LNET request buffers for this service.</para>
2709 <para>Some service-specific events of interest are described in the table below.</para>
2710 <informaltable frame="all">
2712 <colspec colname="c1" colwidth="50*"/>
2713 <colspec colname="c2" colwidth="50*"/>
2717 <para><emphasis role="bold">Parameter</emphasis></para>
2720 <para><emphasis role="bold">Description</emphasis></para>
2728 <literal> ldlm_enqueue </literal></para>
2731 <para>Time it takes to enqueue a lock (this includes file open on the MDS)</para>
2737 <literal> mds_reint </literal></para>
2740 <para>Time it takes to process an MDS modification record (includes
2741 <literal>create</literal>, <literal>mkdir</literal>, <literal>unlink</literal>,
2742 <literal>rename</literal> and <literal>setattr</literal>)</para>
2750 <title>Interpreting MDT Statistics</title>
2752 <para>See also <xref linkend="dbdoclet.50438219_84890"/> (<literal>llobdstat</literal>) and
2753 <xref linkend="dbdoclet.50438273_80593"/> (<literal>collectl</literal>).</para>
2755 <para>MDT <literal>stats</literal> files can be used to track MDT
2756 statistics for the MDS. The example below shows sample output from an
2757 MDT <literal>stats</literal> file.</para>
2758 <screen># lctl get_param mds.*-MDT0000.stats
2759 snapshot_time 1244832003.676892 secs.usecs
2760 open 2 samples [reqs]
2761 close 1 samples [reqs]
2762 getxattr 3 samples [reqs]
2763 process_config 1 samples [reqs]
2764 connect 2 samples [reqs]
2765 disconnect 2 samples [reqs]
2766 statfs 3 samples [reqs]
2767 setattr 1 samples [reqs]
2768 getattr 3 samples [reqs]
2769 llog_init 6 samples [reqs]
2770 notify 16 samples [reqs]</screen>