1 <?xml version='1.0' encoding='UTF-8'?>
2 <chapter xmlns="http://docbook.org/ns/docbook" xmlns:xl="http://www.w3.org/1999/xlink" version="5.0"
3 xml:lang="en-US" xml:id="lustreproc">
4 <title xml:id="lustreproc.title">LustreProc</title>
5 <para>The <literal>/proc</literal> file system acts as an interface to internal data structures in
6 the kernel. This chapter describes entries in <literal>/proc</literal> that are useful for
7 tuning and monitoring aspects of a Lustre file system. It includes these sections:</para>
10 <para><xref linkend="dbdoclet.50438271_83523"/></para>
15 <title>Introduction to <literal>/proc</literal></title>
16 <para>The <literal>/proc</literal> directory provides an interface to internal data structures
17 in the kernel that enables monitoring and tuning of many aspects of Lustre file system and
application performance. These data structures include settings and metrics for components such
19 as memory, networking, file systems, and kernel housekeeping routines, which are available
throughout the hierarchical file layout in <literal>/proc</literal>.
22 <para>Typically, metrics are accessed by reading from <literal>/proc</literal> files and
23 settings are changed by writing to <literal>/proc</literal> files. Some data is server-only,
24 some data is client-only, and some data is exported from the client to the server and is thus
25 duplicated in both locations.</para>
27 <para>In the examples in this chapter, <literal>#</literal> indicates a command is entered as
28 root. Servers are named according to the convention
29 <literal><replaceable>fsname</replaceable>-<replaceable>MDT|OSTnumber</replaceable></literal>.
30 The standard UNIX wildcard designation (*) is used.</para>
32 <para>In most cases, information is accessed using the <literal>lctl get_param</literal> command
33 and settings are changed using the <literal>lctl set_param</literal> command. Some examples
34 are shown below:</para>
37 <para> To obtain data from a Lustre client:</para>
38 <screen># lctl list_param osc.*
39 osc.testfs-OST0000-osc-ffff881071d5cc00
40 osc.testfs-OST0001-osc-ffff881071d5cc00
41 osc.testfs-OST0002-osc-ffff881071d5cc00
42 osc.testfs-OST0003-osc-ffff881071d5cc00
43 osc.testfs-OST0004-osc-ffff881071d5cc00
44 osc.testfs-OST0005-osc-ffff881071d5cc00
45 osc.testfs-OST0006-osc-ffff881071d5cc00
46 osc.testfs-OST0007-osc-ffff881071d5cc00
47 osc.testfs-OST0008-osc-ffff881071d5cc00</screen>
48 <para>In this example, information about OST connections available on a client is displayed
49 (indicated by "osc").</para>
54 <para> To see multiple levels of parameters, use multiple
55 wildcards:<screen># lctl list_param osc.*.*
56 osc.testfs-OST0000-osc-ffff881071d5cc00.active
57 osc.testfs-OST0000-osc-ffff881071d5cc00.blocksize
58 osc.testfs-OST0000-osc-ffff881071d5cc00.checksum_type
59 osc.testfs-OST0000-osc-ffff881071d5cc00.checksums
60 osc.testfs-OST0000-osc-ffff881071d5cc00.connect_flags
61 osc.testfs-OST0000-osc-ffff881071d5cc00.contention_seconds
62 osc.testfs-OST0000-osc-ffff881071d5cc00.cur_dirty_bytes
64 osc.testfs-OST0000-osc-ffff881071d5cc00.rpc_stats</screen></para>
69 <para> To view a specific file, use <literal>lctl get_param</literal>
70 :<screen># lctl get_param osc.lustre-OST0000-osc-ffff881071d5cc00.rpc_stats</screen></para>
73 <para>For more information about using <literal>lctl</literal>, see <xref
74 xmlns:xlink="http://www.w3.org/1999/xlink" linkend="dbdoclet.50438194_51490"/>.</para>
75 <para>Data can also be viewed using the <literal>cat</literal> command with the full path to the
76 file. The form of the <literal>cat</literal> command is similar to that of the <literal>lctl
77 get_param</literal> command with these differences. In the <literal>cat</literal> command: </para>
80 <para> Replace the dots in the path with slashes.</para>
83 <para> Prepend the path with the following as
84 appropriate:<screen>/proc/{fs,sys}/{lustre,lnet}</screen></para>
87 <para>For example, an <literal>lctl get_param</literal> command may look like
88 this:<screen># lctl get_param osc.*.uuid
89 osc.testfs-OST0000-osc-ffff881071d5cc00.uuid=594db456-0685-bd16-f59b-e72ee90e9819
90 osc.testfs-OST0001-osc-ffff881071d5cc00.uuid=594db456-0685-bd16-f59b-e72ee90e9819
92 <para>The equivalent <literal>cat</literal> command looks like
93 this:<screen># cat /proc/fs/lustre/osc/*/uuid
94 594db456-0685-bd16-f59b-e72ee90e9819
95 594db456-0685-bd16-f59b-e72ee90e9819
97 <para>The <literal>llstat</literal> utility can be used to monitor some Lustre file system I/O
98 activity over a specified time period. For more details, see <xref
99 xmlns:xlink="http://www.w3.org/1999/xlink" linkend="dbdoclet.50438219_23232"/></para>
100 <para>Some data is imported from attached clients and is available in a directory called
101 <literal>exports</literal> located in the corresponding per-service directory on a Lustre
103 example:<screen># ls /proc/fs/lustre/obdfilter/testfs-OST0000/exports/192.168.124.9\@o2ib1/
104 # hash ldlm_stats stats uuid</screen></para>
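      <para>The per-client statistics can then be read directly from that directory; a minimal
        example using the NID shown in the listing above:<screen># cat /proc/fs/lustre/obdfilter/testfs-OST0000/exports/192.168.124.9@o2ib1/stats</screen></para>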
106 <title>Identifying Lustre File Systems and Servers</title>
107 <para>Several <literal>/proc</literal> files on the MGS list existing Lustre file systems and
108 file system servers. The examples below are for a Lustre file system called
109 <literal>testfs</literal> with one MDT and three OSTs.</para>
112 <para> To view all known Lustre file systems, enter:</para>
113 <screen>mgs# lctl get_param mgs.*.filesystems
        <para> To view the names of the servers in a file system in which at least one server is
119 enter:<screen>lctl get_param mgs.*.live.<replaceable><filesystem name></replaceable></screen></para>
120 <para>For example:</para>
121 <screen>mgs# lctl get_param mgs.*.live.testfs
129 Secure RPC Config Rules:
131 imperative_recovery_state:
135 notify_duration_total: 0.001000
notify_duration_max: 0.001000
137 notify_count: 4</screen>
140 <para>To view the names of all live servers in the file system as listed in
141 <literal>/proc/fs/lustre/devices</literal>, enter:</para>
142 <screen># lctl device_list
144 1 UP mgc MGC192.168.10.34@tcp 1f45bb57-d9be-2ddb-c0b0-5431a49226705
145 2 UP mdt MDS MDS_uuid 3
146 3 UP lov testfs-mdtlov testfs-mdtlov_UUID 4
147 4 UP mds testfs-MDT0000 testfs-MDT0000_UUID 7
148 5 UP osc testfs-OST0000-osc testfs-mdtlov_UUID 5
149 6 UP osc testfs-OST0001-osc testfs-mdtlov_UUID 5
150 7 UP lov testfs-clilov-ce63ca00 08ac6584-6c4a-3536-2c6d-b36cf9cbdaa04
151 8 UP mdc testfs-MDT0000-mdc-ce63ca00 08ac6584-6c4a-3536-2c6d-b36cf9cbdaa05
152 9 UP osc testfs-OST0000-osc-ce63ca00 08ac6584-6c4a-3536-2c6d-b36cf9cbdaa05
153 10 UP osc testfs-OST0001-osc-ce63ca00 08ac6584-6c4a-3536-2c6d-b36cf9cbdaa05</screen>
154 <para>The information provided on each line includes:</para>
155 <para> - Device number</para>
156 <para> - Device status (UP, INactive, or STopping) </para>
157 <para> - Device name</para>
158 <para> - Device UUID</para>
159 <para> - Reference count (how many users this device has)</para>
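      <para>For example, to check only the OST connections of the <literal>testfs</literal> file
        system in the device list above, the output can be filtered with an ordinary shell pipeline
        (not a dedicated <literal>lctl</literal> option):<screen># lctl device_list | grep "osc testfs-OST"</screen></para>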
162 <para>To display the name of any server, view the device
163 label:<screen>mds# e2label /dev/sda
164 testfs-MDT0000</screen></para>
170 <title>Tuning Multi-Block Allocation (mballoc)</title>
171 <para>Capabilities supported by <literal>mballoc</literal> include:</para>
174 <para> Pre-allocation for single files to help to reduce fragmentation.</para>
177 <para> Pre-allocation for a group of files to enable packing of small files into large,
178 contiguous chunks.</para>
181 <para> Stream allocation to help decrease the seek rate.</para>
184 <para>The following <literal>mballoc</literal> tunables are available:</para>
185 <informaltable frame="all">
187 <colspec colname="c1" colwidth="30*"/>
188 <colspec colname="c2" colwidth="70*"/>
192 <para><emphasis role="bold">Field</emphasis></para>
195 <para><emphasis role="bold">Description</emphasis></para>
203 <literal>mb_max_to_scan</literal></para>
              <para>Maximum number of free chunks that <literal>mballoc</literal> examines before
                making a final decision, in order to avoid a livelock situation.</para>
213 <literal>mb_min_to_scan</literal></para>
216 <para>Minimum number of free chunks that <literal>mballoc</literal> searches before
217 picking the best chunk for allocation. This is useful for small requests to reduce
218 fragmentation of big free chunks.</para>
224 <literal>mb_order2_req</literal></para>
227 <para>For requests equal to 2^N, where N >= <literal>mb_order2_req</literal>, a
228 fast search is done using a base 2 buddy allocation service.</para>
234 <literal>mb_small_req</literal></para>
237 <para><literal>mb_small_req</literal> - Defines (in MB) the upper bound of "small
239 <para><literal>mb_large_req</literal> - Defines (in MB) the lower bound of "large
241 <para>Requests are handled differently based on size:<itemizedlist>
243 <para>< <literal>mb_small_req</literal> - Requests are packed together to
244 form large, aggregated requests.</para>
247 <para>> <literal>mb_small_req</literal> and < <literal>mb_large_req</literal>
248 - Requests are primarily allocated linearly.</para>
251 <para>> <literal>mb_large_req</literal> - Requests are allocated since hard disk
252 seek time is less of a concern in this case.</para>
254 </itemizedlist></para>
255 <para>In general, small requests are combined to create larger requests, which are
256 then placed close to one another to minimize the number of seeks required to access
263 <literal>mb_large_req</literal></para>
269 <literal>mb_prealloc_table</literal></para>
272 <para>A table of values used to preallocate space when a new request is received. By
273 default, the table looks like
274 this:<screen>prealloc_table
275 4 8 16 32 64 128 256 512 1024 2048 </screen></para>
276 <para>When a new request is received, space is preallocated at the next higher
277 increment specified in the table. For example, for requests of less than 4 file
                system blocks, 4 blocks of space are preallocated; for requests of between 4 and 8
                blocks, 8 blocks are preallocated; and so forth.</para>
280 <para>Although customized values can be entered in the table, the performance of
281 general usage file systems will not typically be improved by modifying the table (in
282 fact, in ext4 systems, the table values are fixed). However, for some specialized
283 workloads, tuning the <literal>prealloc_table</literal> values may result in smarter
284 preallocation decisions. </para>
290 <literal>mb_group_prealloc</literal></para>
293 <para>The amount of space (in kilobytes) preallocated for groups of small
300 <para>Buddy group cache information found in
301 <literal>/proc/fs/ldiskfs/<replaceable>disk_device</replaceable>/mb_groups</literal> may
302 be useful for assessing on-disk fragmentation. For
303 example:<screen>cat /proc/fs/ldiskfs/loop0/mb_groups
304 #group: free free frags first pa [ 2^0 2^1 2^2 2^3 2^4 2^5 2^6 2^7 2^8 2^9
306 #0 : 2936 2936 1 42 0 [ 0 0 0 1 1 1 1 2 0 1
307 2 0 0 0 ]</screen></para>
308 <para>In this example, the columns show:<itemizedlist>
310 <para>#group number</para>
313 <para>Available blocks in the group</para>
316 <para>Blocks free on a disk</para>
319 <para>Number of free fragments</para>
322 <para>First free block in the group</para>
325 <para>Number of preallocated chunks (not blocks)</para>
328 <para>A series of available chunks of different sizes</para>
330 </itemizedlist></para>
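      <para>The <literal>mb_*</literal> tunables listed above can be read and adjusted through the
        same <literal>/proc/fs/ldiskfs</literal> hierarchy; a minimal sketch, assuming the tunables
        are exposed for the device in question (the device name <literal>sda1</literal> and the
        value are illustrative only):<screen>oss# cat /proc/fs/ldiskfs/sda1/mb_max_to_scan
oss# echo 200 > /proc/fs/ldiskfs/sda1/mb_max_to_scan</screen></para>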
333 <title>Monitoring Lustre File System I/O</title>
334 <para>A number of system utilities are provided to enable collection of data related to I/O
335 activity in a Lustre file system. In general, the data collected describes:</para>
338 <para> Data transfer rates and throughput of inputs and outputs external to the Lustre file
339 system, such as network requests or disk I/O operations performed</para>
342 <para> Data about the throughput or transfer rates of internal Lustre file system data, such
343 as locks or allocations. </para>
347 <para>It is highly recommended that you complete baseline testing for your Lustre file system
348 to determine normal I/O activity for your hardware, network, and system workloads. Baseline
349 data will allow you to easily determine when performance becomes degraded in your system.
350 Two particularly useful baseline statistics are:</para>
353 <para><literal>brw_stats</literal> – Histogram data characterizing I/O requests to the
354 OSTs. For more details, see <xref xmlns:xlink="http://www.w3.org/1999/xlink"
355 linkend="dbdoclet.50438271_55057"/>.</para>
358 <para><literal>rpc_stats</literal> – Histogram data showing information about RPCs made by
359 clients. For more details, see <xref xmlns:xlink="http://www.w3.org/1999/xlink"
360 linkend="MonitoringClientRCPStream"/>.</para>
364 <section remap="h3" xml:id="MonitoringClientRCPStream">
366 <primary>proc</primary>
367 <secondary>watching RPC</secondary>
368 </indexterm>Monitoring the Client RPC Stream</title>
369 <para>The <literal>rpc_stats</literal> file contains histogram data showing information about
370 remote procedure calls (RPCs) that have been made since this file was last cleared. The
371 histogram data can be cleared by writing any value into the <literal>rpc_stats</literal>
373 <para><emphasis role="italic"><emphasis role="bold">Example:</emphasis></emphasis></para>
374 <screen># lctl get_param osc.testfs-OST0000-osc-ffff810058d2f800.rpc_stats
375 snapshot_time: 1372786692.389858 (secs.usecs)
376 read RPCs in flight: 0
377 write RPCs in flight: 1
378 dio read RPCs in flight: 0
379 dio write RPCs in flight: 0
380 pending write pages: 256
381 pending read pages: 0
384 pages per rpc rpcs % cum % | rpcs % cum %
393 256: 850 100 100 | 18346 99 100
396 rpcs in flight rpcs % cum % | rpcs % cum %
397 0: 691 81 81 | 1740 9 9
398 1: 48 5 86 | 938 5 14
399 2: 29 3 90 | 1059 5 20
400 3: 17 2 92 | 1052 5 26
401 4: 13 1 93 | 920 5 31
402 5: 12 1 95 | 425 2 33
403 6: 10 1 96 | 389 2 35
404 7: 30 3 100 | 11373 61 97
405 8: 0 0 100 | 460 2 100
408 offset rpcs % cum % | rpcs % cum %
409 0: 850 100 100 | 18347 99 99
417 128: 0 0 100 | 4 0 100
420 <para>The header information includes:</para>
423 <para><literal>snapshot_time</literal> - UNIX epoch instant the file was read.</para>
426 <para><literal>read RPCs in flight</literal> - Number of read RPCs issued by the OSC, but
427 not complete at the time of the snapshot. This value should always be less than or equal
428 to <literal>max_rpcs_in_flight</literal>.</para>
431 <para><literal>write RPCs in flight</literal> - Number of write RPCs issued by the OSC,
432 but not complete at the time of the snapshot. This value should always be less than or
433 equal to <literal>max_rpcs_in_flight</literal>.</para>
436 <para><literal>dio read RPCs in flight</literal> - Direct I/O (as opposed to block I/O)
437 read RPCs issued but not completed at the time of the snapshot.</para>
440 <para><literal>dio write RPCs in flight</literal> - Direct I/O (as opposed to block I/O)
441 write RPCs issued but not completed at the time of the snapshot.</para>
444 <para><literal>pending write pages</literal> - Number of pending write pages that have
445 been queued for I/O in the OSC.</para>
448 <para><literal>pending read pages</literal> - Number of pending read pages that have been
449 queued for I/O in the OSC.</para>
452 <para>The tabular data is described in the table below. Each row in the table shows the number
453 of reads or writes (<literal>ios</literal>) occurring for the statistic, the relative
454 percentage (<literal>%</literal>) of total reads or writes, and the cumulative percentage
455 (<literal>cum %</literal>) to that point in the table for the statistic.</para>
456 <informaltable frame="all">
458 <colspec colname="c1" colwidth="40*"/>
459 <colspec colname="c2" colwidth="60*"/>
463 <para><emphasis role="bold">Field</emphasis></para>
466 <para><emphasis role="bold">Description</emphasis></para>
473 <para> pages per RPC</para>
476 <para>Shows cumulative RPC reads and writes organized according to the number of
477 pages in the RPC. A single page RPC increments the <literal>0:</literal>
483 <para> RPCs in flight</para>
486 <para> Shows the number of RPCs that are pending when an RPC is sent. When the first
487 RPC is sent, the <literal>0:</literal> row is incremented. If the first RPC is
488 sent while another RPC is pending, the <literal>1:</literal> row is incremented
497 <para> The page index of the first page read from or written to the object by the
504 <para><emphasis role="italic"><emphasis role="bold">Analysis:</emphasis></emphasis></para>
505 <para>This table provides a way to visualize the concurrency of the RPC stream. Ideally, you
        will see a large clump around the <literal>max_rpcs_in_flight</literal> value, which shows
507 that the network is being kept busy.</para>
508 <para>For information about optimizing the client I/O RPC stream, see <xref
509 xmlns:xlink="http://www.w3.org/1999/xlink" linkend="TuningClientIORPCStream"/>.</para>
511 <section xml:id="lustreproc.clientstats" remap="h3">
513 <primary>proc</primary>
514 <secondary>client stats</secondary>
515 </indexterm>Monitoring Client Activity</title>
      <para>The <literal>stats</literal> file maintains statistics accumulated during typical
517 operation of a client across the VFS interface of the Lustre file system. Only non-zero
518 parameters are displayed in the file. </para>
519 <para>Client statistics are enabled by default.</para>
521 <para>Statistics for all mounted file systems can be discovered by
522 entering:<screen>lctl get_param llite.*.stats</screen></para>
524 <para><emphasis role="italic"><emphasis role="bold">Example:</emphasis></emphasis></para>
525 <screen>client# lctl get_param llite.*.stats
526 snapshot_time 1308343279.169704 secs.usecs
527 dirty_pages_hits 14819716 samples [regs]
528 dirty_pages_misses 81473472 samples [regs]
529 read_bytes 36502963 samples [bytes] 1 26843582 55488794
530 write_bytes 22985001 samples [bytes] 0 125912 3379002
531 brw_read 2279 samples [pages] 1 1 2270
532 ioctl 186749 samples [regs]
533 open 3304805 samples [regs]
534 close 3331323 samples [regs]
535 seek 48222475 samples [regs]
536 fsync 963 samples [regs]
537 truncate 9073 samples [regs]
538 setxattr 19059 samples [regs]
539 getxattr 61169 samples [regs]
541 <para> The statistics can be cleared by echoing an empty string into the
542 <literal>stats</literal> file or by using the command:
543 <screen>lctl set_param llite.*.stats=0</screen></para>
544 <para>The statistics displayed are described in the table below.</para>
545 <informaltable frame="all">
547 <colspec colname="c1" colwidth="3*"/>
548 <colspec colname="c2" colwidth="7*"/>
552 <para><emphasis role="bold">Entry</emphasis></para>
555 <para><emphasis role="bold">Description</emphasis></para>
563 <literal>snapshot_time</literal></para>
566 <para>UNIX epoch instant the stats file was read.</para>
                  <literal>dirty_pages_hits</literal></para>
575 <para>The number of write operations that have been satisfied by the dirty page
576 cache. See <xref xmlns:xlink="http://www.w3.org/1999/xlink"
577 linkend="TuningClientIORPCStream"/> for more information about dirty cache
578 behavior in a Lustre file system.</para>
                  <literal>dirty_pages_misses</literal></para>
587 <para>The number of write operations that were not satisfied by the dirty page
594 <literal>read_bytes</literal></para>
597 <para>The number of read operations that have occurred. Three additional parameters
598 are displayed:</para>
603 <para>The minimum number of bytes read in a single request since the counter
610 <para>The maximum number of bytes read in a single request since the counter
617 <para>The accumulated sum of bytes of all read requests since the counter was
627 <literal>write_bytes</literal></para>
630 <para>The number of write operations that have occurred. Three additional parameters
631 are displayed:</para>
636 <para>The minimum number of bytes written in a single request since the
637 counter was reset.</para>
643 <para>The maximum number of bytes written in a single request since the
644 counter was reset.</para>
650 <para>The accumulated sum of bytes of all write requests since the counter was
660 <literal>brw_read</literal></para>
663 <para>The number of pages that have been read. Three additional parameters are
669 <para>The minimum number of bytes read in a single block read/write
670 (<literal>brw</literal>) read request since the counter was reset.</para>
676 <para>The maximum number of bytes read in a single <literal>brw</literal> read
677 requests since the counter was reset.</para>
683 <para>The accumulated sum of bytes of all <literal>brw</literal> read requests
684 since the counter was reset.</para>
693 <literal>ioctl</literal></para>
696 <para>The number of combined file and directory <literal>ioctl</literal>
703 <literal>open</literal></para>
706 <para>The number of open operations that have succeeded.</para>
712 <literal>close</literal></para>
715 <para>The number of close operations that have succeeded.</para>
721 <literal>seek</literal></para>
724 <para>The number of times <literal>seek</literal> has been called.</para>
730 <literal>fsync</literal></para>
733 <para>The number of times <literal>fsync</literal> has been called.</para>
739 <literal>truncate</literal></para>
742 <para>The total number of calls to both locked and lockless
743 <literal>truncate</literal>.</para>
749 <literal>setxattr</literal></para>
752 <para>The number of times extended attributes have been set. </para>
758 <literal>getxattr</literal></para>
761 <para>The number of times value(s) of extended attributes have been fetched.</para>
767 <para><emphasis role="italic"><emphasis role="bold">Analysis:</emphasis></emphasis></para>
      <para>Information is provided about the amount and type of I/O activity taking place on the
773 <primary>proc</primary>
774 <secondary>read/write survey</secondary>
775 </indexterm>Monitoring Client Read-Write Offset Statistics</title>
776 <para>When the <literal>offset_stats</literal> parameter is set, statistics are maintained for
777 occurrences of a series of read or write calls from a process that did not access the next
778 sequential location. The <literal>OFFSET</literal> field is reset to 0 (zero) whenever a
779 different file is read or written.</para>
781 <para>By default, statistics are not collected in the <literal>offset_stats</literal>,
782 <literal>extents_stats</literal>, and <literal>extents_stats_per_process</literal> files
783 to reduce monitoring overhead when this information is not needed. The collection of
784 statistics in all three of these files is activated by writing anything into any one of
787 <para><emphasis role="italic"><emphasis role="bold">Example:</emphasis></emphasis></para>
788 <screen># lctl get_param llite.testfs-f57dee0.offset_stats
789 snapshot_time: 1155748884.591028 (secs.usecs)
790 RANGE RANGE SMALLEST LARGEST
791 R/W PID START END EXTENT EXTENT OFFSET
792 R 8385 0 128 128 128 0
793 R 8385 0 224 224 224 -128
794 W 8385 0 250 50 100 0
795 W 8385 100 1110 10 500 -150
796 W 8384 0 5233 5233 5233 0
797 R 8385 500 600 100 100 -610</screen>
798 <para>In this example, <literal>snapshot_time</literal> is the UNIX epoch instant the file was
799 read. The tabular data is described in the table below.</para>
800 <para>The <literal>offset_stats</literal> file can be cleared by
801 entering:<screen>lctl set_param llite.*.offset_stats=0</screen></para>
802 <informaltable frame="all">
804 <colspec colname="c1" colwidth="50*"/>
805 <colspec colname="c2" colwidth="50*"/>
809 <para><emphasis role="bold">Field</emphasis></para>
812 <para><emphasis role="bold">Description</emphasis></para>
                <para>Indicates if the non-sequential call was a read or write.</para>
830 <para>Process ID of the process that made the read/write call.</para>
835 <para>RANGE START/RANGE END</para>
838 <para>Range in which the read/write calls were sequential.</para>
843 <para>SMALLEST EXTENT </para>
846 <para>Smallest single read/write in the corresponding range (in bytes).</para>
851 <para>LARGEST EXTENT </para>
854 <para>Largest single read/write in the corresponding range (in bytes).</para>
862 <para>Difference between the previous range end and the current range start.</para>
868 <para><emphasis role="italic"><emphasis role="bold">Analysis:</emphasis></emphasis></para>
869 <para>This data provides an indication of how contiguous or fragmented the data is. For
        example, the fourth entry in the example above shows that the writes for this process (PID 8385) were sequential
871 in the range 100 to 1110 with the minimum write 10 bytes and the maximum write 500 bytes.
872 The range started with an offset of -150 from the <literal>RANGE END</literal> of the
873 previous entry in the example.</para>
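      <para>As noted above, collection of these statistics is off by default; writing anything into
        one of the three files turns it on. A minimal example on a client (the file system name is
        the one used throughout this chapter):<screen># lctl set_param llite.testfs-*.offset_stats=1</screen></para>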
877 <primary>proc</primary>
878 <secondary>read/write survey</secondary>
879 </indexterm>Monitoring Client Read-Write Extent Statistics</title>
880 <para>For in-depth troubleshooting, client read-write extent statistics can be accessed to
881 obtain more detail about read/write I/O extents for the file system or for a particular
884 <para>By default, statistics are not collected in the <literal>offset_stats</literal>,
885 <literal>extents_stats</literal>, and <literal>extents_stats_per_process</literal> files
886 to reduce monitoring overhead when this information is not needed. The collection of
887 statistics in all three of these files is activated by writing anything into any one of
891 <title>Client-Based I/O Extent Size Survey</title>
        <para>The <literal>extents_stats</literal> histogram in the <literal>llite</literal>
          directory shows the statistics for the sizes of the read/write I/O extents. This file
          does not maintain per-process statistics.</para>
895 <para><emphasis role="italic"><emphasis role="bold">Example:</emphasis></emphasis></para>
896 <screen># lctl get_param llite.testfs-*.extents_stats
897 snapshot_time: 1213828728.348516 (secs.usecs)
899 extents calls % cum% | calls % cum%
901 0K - 4K : 0 0 0 | 2 2 2
902 4K - 8K : 0 0 0 | 0 0 2
903 8K - 16K : 0 0 0 | 0 0 2
904 16K - 32K : 0 0 0 | 20 23 26
905 32K - 64K : 0 0 0 | 0 0 26
906 64K - 128K : 0 0 0 | 51 60 86
907 128K - 256K : 0 0 0 | 0 0 86
908 256K - 512K : 0 0 0 | 0 0 86
909 512K - 1024K : 0 0 0 | 0 0 86
910 1M - 2M : 0 0 0 | 11 13 100</screen>
911 <para>In this example, <literal>snapshot_time</literal> is the UNIX epoch instant the file
912 was read. The table shows cumulative extents organized according to size with statistics
913 provided separately for reads and writes. Each row in the table shows the number of RPCs
914 for reads and writes respectively (<literal>calls</literal>), the relative percentage of
915 total calls (<literal>%</literal>), and the cumulative percentage to that point in the
916 table of calls (<literal>cum %</literal>). </para>
917 <para> The file can be cleared by issuing the following
918 command:<screen># lctl set_param llite.testfs-*.extents_stats=0</screen></para>
921 <title>Per-Process Client I/O Statistics</title>
922 <para>The <literal>extents_stats_per_process</literal> file maintains the I/O extent size
923 statistics on a per-process basis.</para>
924 <para><emphasis role="italic"><emphasis role="bold">Example:</emphasis></emphasis></para>
925 <screen># lctl get_param llite.testfs-*.extents_stats_per_process
926 snapshot_time: 1213828762.204440 (secs.usecs)
928 extents calls % cum% | calls % cum%
931 0K - 4K : 0 0 0 | 0 0 0
932 4K - 8K : 0 0 0 | 0 0 0
933 8K - 16K : 0 0 0 | 0 0 0
934 16K - 32K : 0 0 0 | 0 0 0
935 32K - 64K : 0 0 0 | 0 0 0
936 64K - 128K : 0 0 0 | 0 0 0
937 128K - 256K : 0 0 0 | 0 0 0
938 256K - 512K : 0 0 0 | 0 0 0
939 512K - 1024K : 0 0 0 | 0 0 0
940 1M - 2M : 0 0 0 | 10 100 100
943 0K - 4K : 0 0 0 | 0 0 0
944 4K - 8K : 0 0 0 | 0 0 0
945 8K - 16K : 0 0 0 | 0 0 0
946 16K - 32K : 0 0 0 | 20 100 100
949 0K - 4K : 0 0 0 | 0 0 0
950 4K - 8K : 0 0 0 | 0 0 0
951 8K - 16K : 0 0 0 | 0 0 0
952 16K - 32K : 0 0 0 | 0 0 0
953 32K - 64K : 0 0 0 | 0 0 0
954 64K - 128K : 0 0 0 | 16 100 100
957 0K - 4K : 0 0 0 | 1 100 100
960 0K - 4K : 0 0 0 | 1 100 100
963 <para>This table shows cumulative extents organized according to size for each process ID
964 (PID) with statistics provided separately for reads and writes. Each row in the table
965 shows the number of RPCs for reads and writes respectively (<literal>calls</literal>), the
966 relative percentage of total calls (<literal>%</literal>), and the cumulative percentage
967 to that point in the table of calls (<literal>cum %</literal>). </para>
970 <section xml:id="dbdoclet.50438271_55057">
972 <primary>proc</primary>
973 <secondary>block I/O</secondary>
974 </indexterm>Monitoring the OST Block I/O Stream</title>
975 <para>The <literal>brw_stats</literal> file in the <literal>obdfilter</literal> directory
        contains histogram data showing statistics for the number of I/O requests sent to the disk,
977 their size, and whether they are contiguous on the disk or not.</para>
978 <para><emphasis role="italic"><emphasis role="bold">Example:</emphasis></emphasis></para>
      <para>On the OSS, enter:</para>
980 <screen># lctl get_param obdfilter.testfs-OST0000.brw_stats
981 snapshot_time: 1372775039.769045 (secs.usecs)
983 pages per bulk r/w rpcs % cum % | rpcs % cum %
984 1: 108 100 100 | 39 0 0
991 128: 0 0 100 | 24 0 0
992 256: 0 0 100 | 23142 99 100
995 discontiguous pages rpcs % cum % | rpcs % cum %
996 0: 108 100 100 | 23245 100 100
999 discontiguous blocks rpcs % cum % | rpcs % cum %
1000 0: 108 100 100 | 23243 99 99
1001 1: 0 0 100 | 2 0 100
1004 disk fragmented I/Os ios % cum % | ios % cum %
1006 1: 14 12 100 | 23243 99 99
1007 2: 0 0 100 | 2 0 100
1010 disk I/Os in flight ios % cum % | ios % cum %
1011 1: 14 100 100 | 20896 89 89
1012 2: 0 0 100 | 1071 4 94
1013 3: 0 0 100 | 573 2 96
1014 4: 0 0 100 | 300 1 98
1015 5: 0 0 100 | 166 0 98
1016 6: 0 0 100 | 108 0 99
1017 7: 0 0 100 | 81 0 99
1018 8: 0 0 100 | 47 0 99
1019 9: 0 0 100 | 5 0 100
1022 I/O time (1/1000s) ios % cum % | ios % cum %
1025 4: 14 12 100 | 27 0 0
1027 16: 0 0 100 | 31 0 0
1028 32: 0 0 100 | 38 0 0
1029 64: 0 0 100 | 18979 81 82
1030 128: 0 0 100 | 943 4 86
1031 256: 0 0 100 | 1233 5 91
1032 512: 0 0 100 | 1825 7 99
1033 1K: 0 0 100 | 99 0 99
1034 2K: 0 0 100 | 0 0 99
1035 4K: 0 0 100 | 0 0 99
1036 8K: 0 0 100 | 49 0 100
1039 disk I/O size ios % cum % | ios % cum %
1040 4K: 14 100 100 | 41 0 0
1042 16K: 0 0 100 | 1 0 0
1043 32K: 0 0 100 | 0 0 0
1044 64K: 0 0 100 | 4 0 0
1045 128K: 0 0 100 | 17 0 0
1046 256K: 0 0 100 | 12 0 0
1047 512K: 0 0 100 | 24 0 0
1048 1M: 0 0 100 | 23142 99 100
1050 <para>The tabular data is described in the table below. Each row in the table shows the number
1051 of reads and writes occurring for the statistic (<literal>ios</literal>), the relative
1052 percentage of total reads or writes (<literal>%</literal>), and the cumulative percentage to
1053 that point in the table for the statistic (<literal>cum %</literal>). </para>
1054 <informaltable frame="all">
1056 <colspec colname="c1" colwidth="40*"/>
1057 <colspec colname="c2" colwidth="60*"/>
1061 <para><emphasis role="bold">Field</emphasis></para>
1064 <para><emphasis role="bold">Description</emphasis></para>
1072 <literal>pages per bulk r/w</literal></para>
1075 <para>Number of pages per RPC request, which should match aggregate client
1076 <literal>rpc_stats</literal> (see <xref
1077 xmlns:xlink="http://www.w3.org/1999/xlink" linkend="MonitoringClientRCPStream"
1084 <literal>discontiguous pages</literal></para>
1087 <para>Number of discontinuities in the logical file offset of each page in a single
1094 <literal>discontiguous blocks</literal></para>
1097 <para>Number of discontinuities in the physical block allocation in the file system
1098 for a single RPC.</para>
1103 <para><literal>disk fragmented I/Os</literal></para>
1106 <para>Number of I/Os that were not written entirely sequentially.</para>
1111 <para><literal>disk I/Os in flight</literal></para>
1114 <para>Number of disk I/Os currently pending.</para>
1119 <para><literal>I/O time (1/1000s)</literal></para>
1122 <para>Amount of time for each I/O operation to complete.</para>
1127 <para><literal>disk I/O size</literal></para>
1130 <para>Size of each I/O operation.</para>
1136 <para><emphasis role="italic"><emphasis role="bold">Analysis:</emphasis></emphasis></para>
1137 <para>This data provides an indication of extent size and distribution in the file
1142 <title>Tuning Lustre File System I/O</title>
1143 <para>Each OSC has its own tree of tunables. For example:</para>
1144 <screen>$ ls -d /proc/fs/testfs/osc/OSC_client_ost1_MNT_client_2 /localhost
1145 /proc/fs/testfs/osc/OSC_uml0_ost1_MNT_localhost
1146 /proc/fs/testfs/osc/OSC_uml0_ost2_MNT_localhost
1147 /proc/fs/testfs/osc/OSC_uml0_ost3_MNT_localhost
1149 $ ls /proc/fs/testfs/osc/OSC_uml0_ost1_MNT_localhost
blocksize filesfree max_dirty_mb ost_server_uuid stats
1153 <para>The following sections describe some of the parameters that can be tuned in a Lustre file
1155 <section remap="h3" xml:id="TuningClientIORPCStream">
1157 <primary>proc</primary>
1158 <secondary>RPC tunables</secondary>
1159 </indexterm>Tuning the Client I/O RPC Stream</title>
1160 <para>Ideally, an optimal amount of data is packed into each I/O RPC and a consistent number
1161 of issued RPCs are in progress at any time. To help optimize the client I/O RPC stream,
1162 several tuning variables are provided to adjust behavior according to network conditions and
1163 cluster size. For information about monitoring the client I/O RPC stream, see <xref
1164 xmlns:xlink="http://www.w3.org/1999/xlink" linkend="MonitoringClientRCPStream"/>.</para>
1165 <para>RPC stream tunables include:</para>
1169 <para><literal>osc.<replaceable>osc_instance</replaceable>.max_dirty_mb</literal> -
1170 Controls how many MBs of dirty data can be written and queued up in the OSC. POSIX
1171 file writes that are cached contribute to this count. When the limit is reached,
1172 additional writes stall until previously-cached writes are written to the server. This
1173 may be changed by writing a single ASCII integer to the file. Only values between 0
1174 and 2048 or 1/4 of RAM are allowable. If 0 is specified, no writes are cached.
1175 Performance suffers noticeably unless you use large writes (1 MB or more).</para>
1176 <para>To maximize performance, the value for <literal>max_dirty_mb</literal> is
1177 recommended to be 4 * <literal>max_pages_per_rpc </literal>*
1178 <literal>max_rpcs_in_flight</literal>.</para>
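            <para>A minimal sketch of checking and raising this limit on a client (the OSC instance
              name and the value 512 are illustrative only):<screen>client# lctl get_param osc.testfs-OST0000-osc-*.max_dirty_mb
client# lctl set_param osc.testfs-OST0000-osc-*.max_dirty_mb=512</screen></para>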
1181 <para><literal>osc.<replaceable>osc_instance</replaceable>.cur_dirty_bytes</literal> - A
1182 read-only value that returns the current number of bytes written and cached on this
1186 <para><literal>osc.<replaceable>osc_instance</replaceable>.max_pages_per_rpc</literal> -
1187 The maximum number of pages that will undergo I/O in a single RPC to the OST. The
1188 minimum setting is a single page and the maximum setting is 1024 (for systems with a
1189 <literal>PAGE_SIZE</literal> of 4 KB), with the default maximum of 1 MB in the RPC.
1190 It is also possible to specify a units suffix (e.g. <literal>4M</literal>), so that
1191 the RPC size can be specified independently of the client
1192 <literal>PAGE_SIZE</literal>.</para>
1195 <para><literal>osc.<replaceable>osc_instance</replaceable>.max_rpcs_in_flight</literal>
1196 - The maximum number of concurrent RPCs in flight from an OSC to its OST. If the OSC
1197 tries to initiate an RPC but finds that it already has the same number of RPCs
1198 outstanding, it will wait to issue further RPCs until some complete. The minimum
1199 setting is 1 and maximum setting is 256. </para>
1200 <para>To improve small file I/O performance, increase the
1201 <literal>max_rpcs_in_flight</literal> value.</para>
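            <para>For example, to raise the limit for all OSCs on a client (the value 32 is
              illustrative; valid values are 1 to 256 as noted
              above):<screen>client# lctl set_param osc.*.max_rpcs_in_flight=32</screen></para>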
            <para><literal>llite.<replaceable>fsname-instance</replaceable>.max_cached_mb</literal> -
1205 Maximum amount of inactive data cached by the client (default is 3/4 of RAM). For
1207 <screen># lctl get_param llite.testfs-ce63ca00.max_cached_mb
1213 <para>The value for <literal><replaceable>osc_instance</replaceable></literal> is typically
1214 <literal><replaceable>fsname</replaceable>-OST<replaceable>ost_index</replaceable>-osc-<replaceable>mountpoint_instance</replaceable></literal>,
1215 where the value for <literal><replaceable>mountpoint_instance</replaceable></literal> is
1216 unique to each mount point to allow associating osc, mdc, lov, lmv, and llite parameters
1217 with the same mount point. For
1218 example:<screen>lctl get_param osc.testfs-OST0000-osc-ffff88107412f400.rpc_stats
1219 osc.testfs-OST0000-osc-ffff88107412f400.rpc_stats=
1220 snapshot_time: 1375743284.337839 (secs.usecs)
1221 read RPCs in flight: 0
1222 write RPCs in flight: 0
1226 <section remap="h3">
1228 <primary>proc</primary>
1229 <secondary>readahead</secondary>
1230 </indexterm>Tuning File Readahead and Directory Statahead</title>
1231 <para>File readahead and directory statahead enable reading of data into memory before a
1232 process requests the data. File readahead reads file content data into memory and directory
1233 statahead reads metadata into memory. When readahead and statahead work well, a process that
        accesses data finds the information it needs available in memory immediately when
        requested, without the delay of network I/O.</para>
1236 <para condition="l22">In Lustre software release 2.2.0, the directory statahead feature was
1237 improved to enhance directory traversal performance. The improvements primarily addressed
1238 two issues: <orderedlist>
1240 <para>A race condition existed between the statahead thread and other VFS operations
1241 while processing asynchronous <literal>getattr</literal> RPC replies, causing
1242 duplicate entries in dcache. This issue was resolved by using statahead local dcache.
1246 <para>File size/block attributes pre-fetching was not supported, so the traversing
1247 thread had to send synchronous glimpse size RPCs to OST(s). This issue was resolved by
1248 using asynchronous glimpse lock (AGL) RPCs to pre-fetch file size/block attributes
1253 <section remap="h4">
1254 <title>Tuning File Readahead</title>
1255 <para>File readahead is triggered when two or more sequential reads by an application fail
1256 to be satisfied by data in the Linux buffer cache. The size of the initial readahead is 1
1257 MB. Additional readaheads grow linearly and increment until the readahead cache on the
1258 client is full at 40 MB.</para>
1259 <para>Readahead tunables include:</para>
1262 <para><literal>llite.<replaceable>fsname-instance</replaceable>.max_read_ahead_mb</literal>
1263 - Controls the maximum amount of data readahead on a file. Files are read ahead in
1264 RPC-sized chunks (1 MB or the size of the <literal>read()</literal> call, if larger)
1265 after the second sequential read on a file descriptor. Random reads are done at the
1266 size of the <literal>read()</literal> call only (no readahead). Reads to
1267 non-contiguous regions of the file reset the readahead algorithm, and readahead is not
1268 triggered again until sequential reads take place again. </para>
1269 <para>To disable readahead, set this tunable to 0. The default value is 40 MB.</para>
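            <para>For example, to disable file readahead on all Lustre mounts on a client and later
              restore the 40 MB default:<screen>client# lctl set_param llite.*.max_read_ahead_mb=0
client# lctl set_param llite.*.max_read_ahead_mb=40</screen></para>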
1272 <para><literal>llite.<replaceable>fsname-instance</replaceable>.max_read_ahead_whole_mb</literal>
1273 - Controls the maximum size of a file that is read in its entirety, regardless of the
1274 size of the <literal>read()</literal>.</para>
1279 <title>Tuning Directory Statahead and AGL</title>
        <para>Many system commands, such as <literal>ls -l</literal>, <literal>du</literal>, and
          <literal>find</literal>, traverse a directory sequentially. Directory statahead and
          asynchronous glimpse lock (AGL) can be enabled to make these commands run more
          efficiently by improving the performance of directory traversal.</para>
1284 <para>The statahead tunables are:</para>
1287 <para><literal>statahead_max</literal> - Controls whether directory statahead is enabled
1288 and the maximum statahead window size (i.e., how many files can be pre-fetched by the
1289 statahead thread). By default, statahead is enabled and the value of
1290 <literal>statahead_max</literal> is 32.</para>
1291 <para>To disable statahead, run:</para>
1292 <screen>lctl set_param llite.*.statahead_max=0</screen>
1293 <para>To set the maximum statahead window size (<replaceable>n</replaceable>),
1295 <screen>lctl set_param llite.*.statahead_max=<replaceable>n</replaceable></screen>
1296 <para>The maximum value of <replaceable>n</replaceable> is 8192.</para>
1297 <para>The AGL can be controlled by entering:</para>
1298 <screen>lctl set_param llite.*.statahead_agl=<replaceable>n</replaceable></screen>
1299 <para>The default value for <replaceable>n</replaceable> is 1, which enables the AGL. If
1300 <replaceable>n</replaceable> is 0, the AGL is disabled.</para>
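            <para>To check the current statahead and AGL settings on a client,
              enter:<screen>client# lctl get_param llite.*.statahead_max llite.*.statahead_agl</screen></para>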
1303 <para><literal>statahead_stats</literal> - A read-only interface that indicates the
1304 current statahead and AGL statistics, such as how many times statahead/AGL has been
1305 triggered since the last mount, how many statahead/AGL failures have occurred due to
1306 an incorrect prediction or other causes.</para>
            <para>The AGL depends on statahead because the inodes processed by the AGL are produced
              by the statahead thread; that is, the statahead thread feeds the AGL pipeline.
              Consequently, if statahead is disabled, the AGL is also disabled.</para>
1316 <section remap="h3">
1318 <primary>proc</primary>
1319 <secondary>read cache</secondary>
1320 </indexterm>Tuning OSS Read Cache</title>
1321 <para>The OSS read cache feature provides read-only caching of data on an OSS. This
1322 functionality uses the Linux page cache to store the data and uses as much physical memory
1323 as is allocated.</para>
1324 <para>OSS read cache improves Lustre file system performance in these situations:</para>
1327 <para>Many clients are accessing the same data set (as in HPC applications or when
1328 diskless clients boot from the Lustre file system).</para>
1331 <para>One client is storing data while another client is reading it (i.e., clients are
1332 exchanging data via the OST).</para>
1335 <para>A client has very limited caching of its own.</para>
1338 <para>OSS read cache offers these benefits:</para>
1341 <para>Allows OSTs to cache read data more frequently.</para>
1344 <para>Improves repeated reads to match network speeds instead of disk speeds.</para>
1347 <para>Provides the building blocks for OST write cache (small-write aggregation).</para>
1350 <section remap="h4">
1351 <title>Using OSS Read Cache</title>
1352 <para>OSS read cache is implemented on the OSS, and does not require any special support on
1353 the client side. Since OSS read cache uses the memory available in the Linux page cache,
1354 the appropriate amount of memory for the cache should be determined based on I/O patterns;
1355 if the data is mostly reads, then more cache is required than would be needed for mostly
1357 <para>OSS read cache is managed using the following tunables:</para>
1360 <para><literal>read_cache_enable</literal> - Controls whether data read from disk during
1361 a read request is kept in memory and available for later read requests for the same
1362 data, without having to re-read it from disk. By default, read cache is enabled
1363 (<literal>read_cache_enable=1</literal>).</para>
1364 <para>When the OSS receives a read request from a client, it reads data from disk into
1365 its memory and sends the data as a reply to the request. If read cache is enabled,
1366 this data stays in memory after the request from the client has been fulfilled. When
1367 subsequent read requests for the same data are received, the OSS skips reading data
1368 from disk and the request is fulfilled from the cached data. The read cache is managed
1369 by the Linux kernel globally across all OSTs on that OSS so that the least recently
1370 used cache pages are dropped from memory when the amount of free memory is running
1372 <para>If read cache is disabled (<literal>read_cache_enable=0</literal>), the OSS
1373 discards the data after a read request from the client is serviced and, for subsequent
1374 read requests, the OSS again reads the data from disk.</para>
1375 <para>To disable read cache on all the OSTs of an OSS, run:</para>
1376 <screen>root@oss1# lctl set_param obdfilter.*.read_cache_enable=0</screen>
1377 <para>To re-enable read cache on one OST, run:</para>
1378 <screen>root@oss1# lctl set_param obdfilter.{OST_name}.read_cache_enable=1</screen>
1379 <para>To check if read cache is enabled on all OSTs on an OSS, run:</para>
1380 <screen>root@oss1# lctl get_param obdfilter.*.read_cache_enable</screen>
1383 <para><literal>writethrough_cache_enable</literal> - Controls whether data sent to the
1384 OSS as a write request is kept in the read cache and available for later reads, or if
1385 it is discarded from cache when the write is completed. By default, the writethrough
1386 cache is enabled (<literal>writethrough_cache_enable=1</literal>).</para>
1387 <para>When the OSS receives write requests from a client, it receives data from the
1388 client into its memory and writes the data to disk. If the writethrough cache is
1389 enabled, this data stays in memory after the write request is completed, allowing the
1390 OSS to skip reading this data from disk if a later read request, or partial-page write
1391 request, for the same data is received.</para>
1392 <para>If the writethrough cache is disabled
              (<literal>writethrough_cache_enable=0</literal>), the OSS discards the data after
1394 the write request from the client is completed. For subsequent read requests, or
1395 partial-page write requests, the OSS must re-read the data from disk.</para>
1396 <para>Enabling writethrough cache is advisable if clients are doing small or unaligned
1397 writes that would cause partial-page updates, or if the files written by one node are
1398 immediately being accessed by other nodes. Some examples where enabling writethrough
1399 cache might be useful include producer-consumer I/O models or shared-file writes with
1400 a different node doing I/O not aligned on 4096-byte boundaries. </para>
1401 <para>Disabling the writethrough cache is advisable when files are mostly written to the
1402 file system but are not re-read within a short time period, or files are only written
1403 and re-read by the same node, regardless of whether the I/O is aligned or not.</para>
1404 <para>To disable the writethrough cache on all OSTs of an OSS, run:</para>
1405 <screen>root@oss1# lctl set_param obdfilter.*.writethrough_cache_enable=0</screen>
1406 <para>To re-enable the writethrough cache on one OST, run:</para>
1407 <screen>root@oss1# lctl set_param obdfilter.{OST_name}.writethrough_cache_enable=1</screen>
1408 <para>To check if the writethrough cache is enabled, run:</para>
            <screen>root@oss1# lctl get_param obdfilter.*.writethrough_cache_enable</screen>
1412 <para><literal>readcache_max_filesize</literal> - Controls the maximum size of a file
1413 that both the read cache and writethrough cache will try to keep in memory. Files
1414 larger than <literal>readcache_max_filesize</literal> will not be kept in cache for
1415 either reads or writes.</para>
1416 <para>Setting this tunable can be useful for workloads where relatively small files are
1417 repeatedly accessed by many clients, such as job startup files, executables, log
1418 files, etc., but large files are read or written only once. By not putting the larger
1419 files into the cache, it is much more likely that more of the smaller files will
1420 remain in cache for a longer time.</para>
1421 <para>When setting <literal>readcache_max_filesize</literal>, the input value can be
1422 specified in bytes, or can have a suffix to indicate other binary units such as
1423 <literal>K</literal> (kilobytes), <literal>M</literal> (megabytes),
1424 <literal>G</literal> (gigabytes), <literal>T</literal> (terabytes), or
1425 <literal>P</literal> (petabytes).</para>
1426 <para>To limit the maximum cached file size to 32 MB on all OSTs of an OSS, run:</para>
1427 <screen>root@oss1# lctl set_param obdfilter.*.readcache_max_filesize=32M</screen>
1428 <para>To disable the maximum cached file size on an OST, run:</para>
1429 <screen>root@oss1# lctl set_param obdfilter.{OST_name}.readcache_max_filesize=-1</screen>
1430 <para>To check the current maximum cached file size on all OSTs of an OSS, run:</para>
1431 <screen>root@oss1# lctl get_param obdfilter.*.readcache_max_filesize</screen>
1438 <primary>proc</primary>
1439 <secondary>OSS journal</secondary>
1440 </indexterm>Enabling OSS Asynchronous Journal Commit</title>
1441 <para>The OSS asynchronous journal commit feature asynchronously writes data to disk without
1442 forcing a journal flush. This reduces the number of seeks and significantly improves
1443 performance on some hardware.</para>
1445 <para>Asynchronous journal commit cannot work with direct I/O-originated writes
1446 (<literal>O_DIRECT</literal> flag set). In this case, a journal flush is forced. </para>
1448 <para>When the asynchronous journal commit feature is enabled, client nodes keep data in the
1449 page cache (a page reference). Lustre clients monitor the last committed transaction number
1450 (<literal>transno</literal>) in messages sent from the OSS to the clients. When a client
1451 sees that the last committed <literal>transno</literal> reported by the OSS is at least
1452 equal to the bulk write <literal>transno</literal>, it releases the reference on the
1453 corresponding pages. To avoid page references being held for too long on clients after a
1454 bulk write, a 7 second ping request is scheduled (the default OSS file system commit time
1455 interval is 5 seconds) after the bulk write reply is received, so the OSS has an opportunity
1456 to report the last committed <literal>transno</literal>.</para>
1457 <para>If the OSS crashes before the journal commit occurs, then intermediate data is lost.
1458 However, OSS recovery functionality incorporated into the asynchronous journal commit
1459 feature causes clients to replay their write requests and compensate for the missing disk
1460 updates by restoring the state of the file system.</para>
1461 <para>By default, <literal>sync_journal</literal> is enabled
1462 (<literal>sync_journal=1</literal>), so that journal entries are committed synchronously.
1463 To enable asynchronous journal commit, set the <literal>sync_journal</literal> parameter to
1464 <literal>0</literal> by entering: </para>
1465 <screen>$ lctl set_param obdfilter.*.sync_journal=0
1466 obdfilter.lol-OST0001.sync_journal=0</screen>
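      <para>The current setting can be confirmed afterwards by reading the same parameter back (the
        target name follows the example above):<screen>$ lctl get_param obdfilter.*.sync_journal
obdfilter.lol-OST0001.sync_journal=0</screen></para>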
1467 <para>An associated <literal>sync-on-lock-cancel</literal> feature (enabled by default)
1468 addresses a data consistency issue that can result if an OSS crashes after multiple clients
1469 have written data into intersecting regions of an object, and then one of the clients also
        crashes. This creates a condition in which the POSIX requirement for continuous writes is
        violated, along with a potential for corrupted data. With
1472 <literal>sync-on-lock-cancel</literal> enabled, if a cancelled lock has any volatile
1473 writes attached to it, the OSS synchronously writes the journal to disk on lock
1474 cancellation. Disabling the <literal>sync-on-lock-cancel</literal> feature may enhance
1475 performance for concurrent write workloads, but it is recommended that you not disable this
1477 <para> The <literal>sync_on_lock_cancel</literal> parameter can be set to the following
1481 <para><literal>always</literal> - Always force a journal flush on lock cancellation
            (default when asynchronous journal commit is enabled, i.e.,
            <literal>sync_journal=0</literal>).</para>
1485 <para><literal>blocking</literal> - Force a journal flush only when the local cancellation
1486 is due to a blocking callback.</para>
          <para><literal>never</literal> - Do not force any journal flush (default when
            asynchronous journal commit is disabled, i.e., <literal>sync_journal=1</literal>).</para>
      <para>For example, to set <literal>sync_on_lock_cancel</literal> so that it never forces a
        journal flush, use a command similar to:</para>
      <screen>$ lctl set_param obdfilter.*.sync_on_lock_cancel=never
1496 obdfilter.lol-OST0001.sync_on_lock_cancel=never</screen>
1500 <title>Configuring Timeouts in a Lustre File System</title>
1501 <para>In a Lustre file system, RPC timeouts are set using an adaptive timeouts mechanism, which
1502 is enabled by default. Servers track RPC completion times and then report back to clients
1503 estimates for completion times for future RPCs. Clients use these estimates to set RPC
1504 timeout values. If the processing of server requests slows down for any reason, the server
1505 estimates for RPC completion increase, and clients then revise RPC timeout values to allow
1506 more time for RPC completion.</para>
  <para>If the RPCs queued on the server approach the RPC timeout specified by the client, the
    server sends an "early reply" to the client, telling the client to allow more time; this avoids
    RPC timeouts and disconnect/reconnect cycles. Conversely, as server processing speeds up, RPC
1510 timeout values decrease, resulting in faster detection if the server becomes non-responsive
1511 and quicker connection to the failover partner of the server.</para>
1514 <primary>proc</primary>
1515 <secondary>configuring adaptive timeouts</secondary>
1516 </indexterm><indexterm>
1517 <primary>configuring</primary>
1518 <secondary>adaptive timeouts</secondary>
1519 </indexterm><indexterm>
1520 <primary>proc</primary>
1521 <secondary>adaptive timeouts</secondary>
1522 </indexterm>Configuring Adaptive Timeouts</title>
1523 <para>The adaptive timeout parameters in the table below can be set persistently system-wide
1524 using <literal>lctl conf_param</literal> on the MGS. For example, the following command sets
1525 the <literal>at_max</literal> value for all servers and clients associated with the file
1527 <literal>testfs</literal>:<screen>lctl conf_param testfs.sys.at_max=1500</screen></para>
1529 <para>Clients that access multiple Lustre file systems must use the same parameter values
1530 for all file systems.</para>
1532 <informaltable frame="all">
1534 <colspec colname="c1" colwidth="30*"/>
1535 <colspec colname="c2" colwidth="80*"/>
1539 <para><emphasis role="bold">Parameter</emphasis></para>
1542 <para><emphasis role="bold">Description</emphasis></para>
1550 <literal> at_min </literal></para>
1553 <para>Minimum adaptive timeout (in seconds). The default value is 0. The
1554 <literal>at_min</literal> parameter is the minimum processing time that a server
1555 will report. Ideally, <literal>at_min</literal> should be set to its default
1556 value. Clients base their timeouts on this value, but they do not use this value
1558 <para>If, for unknown reasons (usually due to temporary network outages), the
1559 adaptive timeout value is too short and clients time out their RPCs, you can
1560 increase the <literal>at_min</literal> value to compensate for this.</para>
1566 <literal> at_max </literal></para>
1569 <para>Maximum adaptive timeout (in seconds). The <literal>at_max</literal> parameter
1570 is an upper-limit on the service time estimate. If <literal>at_max</literal> is
1571 reached, an RPC request times out.</para>
1572 <para>Setting <literal>at_max</literal> to 0 causes adaptive timeouts to be disabled
1573 and a fixed timeout method to be used instead (see <xref
1574 xmlns:xlink="http://www.w3.org/1999/xlink" linkend="section_c24_nt5_dl"/></para>
1576 <para>If slow hardware causes the service estimate to increase beyond the default
1577 value of <literal>at_max</literal>, increase <literal>at_max</literal> to the
1578 maximum time you are willing to wait for an RPC completion.</para>
1585 <literal> at_history </literal></para>
1588 <para>Time period (in seconds) within which adaptive timeouts remember the slowest
1589 event that occurred. The default is 600.</para>
1595 <literal> at_early_margin </literal></para>
                <para>Amount of time (in seconds) before the deadline of a queued request at which
                  the Lustre server sends an early reply. Default is 5.</para>
1605 <literal> at_extra </literal></para>
1608 <para>Incremental amount of time that a server requests with each early reply (in
1609 seconds). The server does not know how much time the RPC will take, so it asks for
1610 a fixed value. The default is 30, which provides a balance between sending too
1611 many early replies for the same RPC and overestimating the actual completion
1613 <para>When a server finds a queued request about to time out and needs to send an
1614 early reply out, the server adds the <literal>at_extra</literal> value. If the
1615 time expires, the Lustre server drops the request, and the client enters recovery
1616 status and reconnects to restore the connection to normal status.</para>
1617 <para>If you see multiple early replies for the same RPC asking for 30-second
1618 increases, change the <literal>at_extra</literal> value to a larger number to cut
1619 down on early replies sent and, therefore, network load.</para>
1625 <literal> ldlm_enqueue_min </literal></para>
1628 <para>Minimum lock enqueue time (in seconds). The default is 100. The time it takes
1629 to enqueue a lock, <literal>ldlm_enqueue</literal>, is the maximum of the measured
1630 enqueue estimate (influenced by the <literal>at_min</literal> and
1631 <literal>at_max</literal> parameters) multiplied by a weighting factor, and the
1632 value of <literal>ldlm_enqueue_min</literal>. </para>
1633 <para>Lustre Distributed Lock Manager (LDLM) lock enqueues have a dedicated minimum
1634 value for <literal>ldlm_enqueue_min</literal>. Lock enqueue timeouts increase as
1635 the measured enqueue times increase (similar to adaptive timeouts).</para>
1642 <title>Interpreting Adaptive Timeout Information</title>
1643 <para>Adaptive timeout information can be obtained from the <literal>timeouts</literal>
1644 files in <literal>/proc/fs/lustre/*/</literal> on each server and client using the
1645 <literal>lctl</literal> command. To read information from a <literal>timeouts</literal>
1646 file, enter a command similar to:</para>
1647 <screen># lctl get_param -n ost.*.ost_io.timeouts
1648 service : cur 33 worst 34 (at 1193427052, 0d0h26m40s ago) 1 1 33 2</screen>
1649 <para>In this example, the <literal>ost_io</literal> service on this node is currently
1650 reporting an estimated RPC service time of 33 seconds. The worst RPC service time was 34
1651 seconds, which occurred 26 minutes ago.</para>
1652 <para>The output also provides a history of service times. Four "bins" of adaptive
1653 timeout history are shown, with the maximum RPC time in each bin reported. In both the
1654 0-150s bin and the 150-300s bin, the maximum RPC time was 1. The 300-450s bin shows the
1655 worst (maximum) RPC time at 33 seconds, and the 450-600s bin shows a maximum RPC time
1656 of 2 seconds. The estimated service time is the maximum value across the four bins (33
1657 seconds in this example).</para>
1658 <para>Service times (as reported by the servers) are also tracked in the client OBDs, as
1659 shown in this example:</para>
1660 <screen># lctl get_param osc.*.timeouts
1661 last reply : 1193428639, 0d0h00m00s ago
1662 network : cur 1 worst 2 (at 1193427053, 0d0h26m26s ago) 1 1 1 1
1663 portal 6 : cur 33 worst 34 (at 1193427052, 0d0h26m27s ago) 33 33 33 2
1664 portal 28 : cur 1 worst 1 (at 1193426141, 0d0h41m38s ago) 1 1 1 1
1665 portal 7 : cur 1 worst 1 (at 1193426141, 0d0h41m38s ago) 1 0 1 1
1666 portal 17 : cur 1 worst 1 (at 1193426177, 0d0h41m02s ago) 1 0 0 1
1668 <para>In this example, portal 6, the <literal>ost_io</literal> service portal, shows the
1669 history of service estimates reported by the portal.</para>
1670 <para>Server statistic files also show the range of estimates including min, max, sum, and
1671 sumsq. For example:</para>
1672 <screen># lctl get_param mdt.*.mdt.stats
1674 req_timeout 6 samples [sec] 1 10 15 105
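<para>The four values after the unit are the minimum, maximum, sum, and sum of squares of the
samples. For example, the six <literal>req_timeout</literal> samples above have a mean of
15/6 = 2.5 seconds, with individual values ranging from 1 to 10 seconds.</para>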
1679 <section xml:id="section_c24_nt5_dl">
1680 <title>Setting Static Timeouts<indexterm>
1681 <primary>proc</primary>
1682 <secondary>static timeouts</secondary>
1683 </indexterm></title>
1684 <para>The Lustre software provides two sets of static (fixed) timeouts, LND timeouts and
1685 Lustre timeouts, which are used when adaptive timeouts are not enabled.</para>
1689 <para><emphasis role="italic"><emphasis role="bold">LND timeouts</emphasis></emphasis> -
1690 LND timeouts ensure that point-to-point communications across a network complete in a
1691 finite time in the presence of failures, such as lost packets or broken connections.
1692 LND timeout parameters are set for each individual LND.</para>
1693 <para>LND timeouts are logged with the <literal>S_LND</literal> flag set. They are not
1694 printed as console messages, so check the Lustre log for <literal>D_NETERROR</literal>
1695 messages or enable printing of <literal>D_NETERROR</literal> messages to the console
1696 using:<screen>lctl set_param printk=+neterror</screen></para>
1697 <para>Congested routers can be a source of spurious LND timeouts. To avoid this
1698 situation, increase the number of LNET router buffers to reduce back-pressure and/or
1699 increase LND timeouts on all nodes on all connected networks. Also consider increasing
1700 the total number of LNET router nodes in the system so that the aggregate router
1701 bandwidth matches the aggregate server bandwidth.</para>
1704 <para><emphasis role="italic"><emphasis role="bold">Lustre timeouts
1705 </emphasis></emphasis>- Lustre timeouts ensure that Lustre RPCs complete in a finite
1706 time in the presence of failures when adaptive timeouts are not enabled. Adaptive
1707 timeouts are enabled by default. To disable adaptive timeouts at run time, set
1708 <literal>at_max</literal> to 0 by running on the
1709 MGS:<screen># lctl conf_param <replaceable>fsname</replaceable>.sys.at_max=0</screen></para>
1711 <para>Changing the status of adaptive timeouts at runtime may cause a transient client
1712 timeout, recovery, and reconnection.</para>
1714 <para>Lustre timeouts are always printed as console messages. </para>
1715 <para>If Lustre timeouts are not accompanied by LND timeouts, increase the Lustre
1716 timeout on both servers and clients. Lustre timeouts are set using a command such as
1717 the following:<screen># lctl set_param timeout=30</screen></para>
1718 <para>Lustre timeout parameters are described in the table below.</para>
1721 <informaltable frame="all">
1723 <colspec colname="c1" colnum="1" colwidth="30*"/>
1724 <colspec colname="c2" colnum="2" colwidth="70*"/>
1727 <entry>Parameter</entry>
1728 <entry>Description</entry>
1733 <entry><literal>timeout</literal></entry>
1735 <para>The time that a client waits for a server to complete an RPC (default 100s).
1736 Servers wait half this time for a normal client RPC to complete and a quarter of
1737 this time for a single bulk request (read or write of up to 4 MB) to complete.
1738 The client pings recoverable targets (MDS and OSTs) at one quarter of the
1739 timeout, and the server waits one and a half times the timeout before evicting a
1740 client for being "stale."</para>
1741 <para>A Lustre client sends periodic 'ping' messages to servers with which
1742 it has had no communication for the specified period of time. Any network
1743 activity between a client and a server in the file system also serves as a ping.</para>
1748 <entry><literal>ldlm_timeout</literal></entry>
1750 <para>The time that a server waits for a client to reply to an initial AST (lock
1751 cancellation request). The default is 20s for an OST and 6s for an MDS. If the
1752 client replies to the AST, the server will give it a normal timeout (half the
1753 client timeout) to flush any dirty data and release the lock.</para>
1757 <entry><literal>fail_loc</literal></entry>
1759 <para>An internal debugging failure hook. The default value of
1760 <literal>0</literal> means that no failure will be triggered or tested.</para>
1765 <entry><literal>dump_on_timeout</literal></entry>
1767 <para>Triggers a dump of the Lustre debug log when a timeout occurs. The default
1768 value of <literal>0</literal> (zero) means a dump of the Lustre debug log will
1769 not be triggered.</para>
1773 <entry><literal>dump_on_eviction</literal></entry>
1775 <para>Triggers a dump of the Lustre debug log when an eviction occurs. The default
1776 value of <literal>0</literal> (zero) means a dump of the Lustre debug log will
1777 not be triggered. </para>
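<para>As a quick check, these parameters can be read and the debug-log dumps enabled with
<literal>lctl</literal>. This is a minimal sketch; it assumes the parameter names above are
exposed through <literal>lctl</literal> on your Lustre version:</para>
<screen># lctl get_param timeout ldlm_timeout
timeout=100
ldlm_timeout=20
# lctl set_param dump_on_timeout=1
dump_on_timeout=1</screen>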
1786 <section remap="h3">
1788 <primary>proc</primary>
1789 <secondary>LNET</secondary>
1790 </indexterm><indexterm>
1791 <primary>LNET</primary>
1792 <secondary>proc</secondary>
1793 </indexterm>Monitoring LNET</title>
1794 <para>LNET information is located in <literal>/proc/sys/lnet</literal> in these files:<itemizedlist>
1796 <para><literal>peers</literal> - Shows all NIDs known to this node and provides
1797 information on the queue state.</para>
1798 <para>Example:</para>
1799 <screen># lctl get_param peers
1800 nid refs state max rtr min tx min queue
1801 0@lo 1 ~rtr 0 0 0 0 0 0
1802 192.168.10.35@tcp 1 ~rtr 8 8 8 8 6 0
1803 192.168.10.36@tcp 1 ~rtr 8 8 8 8 6 0
1804 192.168.10.37@tcp 1 ~rtr 8 8 8 8 6 0</screen>
1805 <para>The fields are explained in the table below:</para>
1806 <informaltable frame="all">
1808 <colspec colname="c1" colwidth="30*"/>
1809 <colspec colname="c2" colwidth="80*"/>
1813 <para><emphasis role="bold">Field</emphasis></para>
1816 <para><emphasis role="bold">Description</emphasis></para>
1824 <literal>refs</literal>
1828 <para>A reference count. </para>
1834 <literal>state</literal>
1838 <para>If the node is a router, this field indicates the state of the router. Possible values are:</para>
1842 <para><literal>NA</literal> - Indicates the node is not a router.</para>
1845 <para><literal>up/down</literal> - Indicates whether the node (router) is up or down.</para>
1854 <literal>max </literal></para>
1857 <para>Maximum number of concurrent sends from this peer.</para>
1863 <literal>rtr </literal></para>
1866 <para>Number of routing buffer credits.</para>
1872 <literal>min </literal></para>
1875 <para>Minimum number of routing buffer credits seen.</para>
1881 <literal>tx </literal></para>
1884 <para>Number of send credits.</para>
1890 <literal>min </literal></para>
1893 <para>Minimum number of send credits seen.</para>
1899 <literal>queue </literal></para>
1902 <para>Total bytes in active/queued sends.</para>
1908 <para>Credits are initialized to allow a certain number of operations (in the example
1909 shown above the table, eight, as shown in the <literal>max</literal> column). LNET keeps track
1910 of the minimum number of credits ever seen over time, showing the peak congestion that
1911 has occurred during the period monitored. Fewer available credits indicate a more
1912 congested resource. </para>
1913 <para>The number of credits currently in flight (number of transmit credits) is shown in
1914 the <literal>tx</literal> column. The maximum number of send credits available is shown
1915 in the <literal>max</literal> column and never changes. The number of router buffers
1916 available for consumption by a peer is shown in the <literal>rtr</literal> column.</para>
1918 <para>Therefore, <literal>rtr</literal> – <literal>tx</literal> is the number of transmits
1919 in flight. Typically, <literal>rtr == max</literal>, although a configuration can be set
1920 such that <literal>max >= rtr</literal>. The ratio of routing buffer credits to send
1921 credits (<literal>rtr/tx</literal>) that is less than <literal>max</literal> indicates
1922 operations are in progress. If the ratio <literal>rtr/tx</literal> is greater than
1923 <literal>max</literal>, operations are blocking.</para>
1924 <para>LNET also limits concurrent sends and number of router buffers allocated to a single
1925 peer so that no peer can occupy all these resources.</para>
1928 <para><literal>nis</literal> - Shows the current queue health on this node.</para>
1929 <para>Example:</para>
1930 <screen># lctl get_param nis
1931 nid refs peer max tx min
1933 192.168.10.34@tcp 4 8 256 256 252
1935 <para> The fields are explained in the table below.</para>
1936 <informaltable frame="all">
1938 <colspec colname="c1" colwidth="30*"/>
1939 <colspec colname="c2" colwidth="80*"/>
1943 <para><emphasis role="bold">Field</emphasis></para>
1946 <para><emphasis role="bold">Description</emphasis></para>
1954 <literal> nid </literal></para>
1957 <para>Network interface.</para>
1963 <literal> refs </literal></para>
1966 <para>Internal reference counter.</para>
1972 <literal> peer </literal></para>
1975 <para>Number of peer-to-peer send credits on this NID. Credits are used to size
1976 buffer pools.</para>
1982 <literal> max </literal></para>
1985 <para>Total number of send credits on this NID.</para>
1991 <literal> tx </literal></para>
1994 <para>Current number of send credits available on this NID.</para>
2000 <literal> min </literal></para>
2003 <para>Lowest number of send credits available on this NID.</para>
2009 <literal> queue </literal></para>
2012 <para>Total bytes in active/queued sends.</para>
2018 <para><emphasis role="bold"><emphasis role="italic">Analysis:</emphasis></emphasis></para>
2019 <para>Subtracting <literal>tx</literal> from <literal>max</literal>
2020 (<literal>max</literal> - <literal>tx</literal>) yields the number of sends currently
2021 active. A large or increasing number of active sends may indicate a problem; a simple way to sample these files periodically is shown after this list.</para>
2023 </itemizedlist></para>
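<para>To watch for developing congestion over time (for example, the <literal>min</literal>
columns dropping toward zero), both files can be sampled periodically. A simple sketch using
standard shell tools:</para>
<screen># watch -n 5 'lctl get_param -n peers nis'</screen>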
2025 <section remap="h3">
2027 <primary>proc</primary>
2028 <secondary>free space</secondary>
2029 </indexterm>Allocating Free Space on OSTs</title>
2030 <para>Free space is allocated using either a round-robin or a weighted algorithm. The allocation
2031 method is determined by the maximum amount of free-space imbalance between the OSTs. When free
2032 space is relatively balanced across OSTs, the faster round-robin allocator is used, which
2033 maximizes network balancing. The weighted allocator is used when any two OSTs are out of
2034 balance by more than a specified threshold.</para>
2035 <para>Free space distribution can be tuned using these two <literal>/proc</literal> tunables:
2039 <para><literal>qos_threshold_rr</literal> - The threshold at which the allocation method
2040 switches from round-robin to weighted is set in this file. The default is to switch to the
2041 weighted algorithm when any two OSTs are out of balance by more than 17 percent.</para>
2044 <para><literal>qos_prio_free</literal> - The weighting priority used by the weighted
2045 allocator can be adjusted in this file. Increasing the value of
2046 <literal>qos_prio_free</literal> puts more weighting on the amount of free space
2047 available on each OST and less on how stripes are distributed across OSTs. The default
2048 value is 91 percent. When the free space priority is set to 100, weighting is based
2049 entirely on free space and location is no longer used by the striping algorithm.</para>
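<para>Both tunables can be inspected and adjusted with <literal>lctl</literal>. The sketch
below assumes they are exposed under the <literal>lov</literal> component on the MDS; on some
Lustre versions the prefix is <literal>lod</literal> instead, so verify the exact name with
<literal>lctl list_param</literal> first:</para>
<screen># lctl list_param lov.*.qos_*
# lctl get_param lov.*.qos_threshold_rr lov.*.qos_prio_free
# lctl set_param lov.*.qos_prio_free=100</screen>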
2052 <para>For more information about monitoring and managing free space, see <xref
2053 xmlns:xlink="http://www.w3.org/1999/xlink" linkend="dbdoclet.50438209_10424"/>.</para>
2055 <section remap="h3">
2057 <primary>proc</primary>
2058 <secondary>locking</secondary>
2059 </indexterm>Configuring Locking</title>
2060 <para>The <literal>lru_size</literal> parameter is used to control the number of client-side
2061 locks kept in the LRU (least recently used) queue of cached locks. The LRU size is dynamic, based on load, to optimize the number
2062 of locks available to nodes that have different workloads (e.g., login/build nodes vs. compute
2063 nodes vs. backup nodes).</para>
2064 <para>The total number of locks available is a function of the server RAM. The default limit is
2065 50 locks/1 MB of RAM. If memory pressure is too high, the LRU size is shrunk. The number of
2066 locks on the server is limited to <emphasis role="italic">the number of OSTs per
2067 server</emphasis> * <emphasis role="italic">the number of clients</emphasis> * <emphasis
2068 role="italic">the value of the</emphasis>
2069 <literal>lru_size</literal>
2070 <emphasis role="italic">setting on the client</emphasis> as follows: </para>
2073 <para>To enable automatic LRU sizing, set the <literal>lru_size</literal> parameter to 0. In
2074 this case, the <literal>lru_size</literal> parameter shows the current number of locks
2075 being used on the export. LRU sizing is enabled by default.</para>
2078 <para>To specify a maximum number of locks, set the <literal>lru_size</literal> parameter to
2079 a value other than zero but, normally, less than 100 * <emphasis role="italic">number of
2080 CPUs in client</emphasis>. It is recommended that you only increase the LRU size on a
2081 few login nodes where users access the file system interactively.</para>
2084 <para>To clear the LRU on a single client, and, as a result, flush client cache without changing
2085 the <literal>lru_size</literal> value, run:</para>
2086 <screen>$ lctl set_param ldlm.namespaces.<replaceable>osc_name|mdc_name</replaceable>.lru_size=clear</screen>
2087 <para>If the LRU size is set to be less than the number of existing unused locks, the unused
2088 locks are canceled immediately. Use <literal>echo clear</literal> to cancel all locks without
2089 changing the value.</para>
2091 <para>The <literal>lru_size</literal> parameter can only be set temporarily using
2092 <literal>lctl set_param</literal>; it cannot be set permanently.</para>
2094 <para>To disable automatic LRU sizing on the Lustre clients, run:</para>
2095 <screen>$ lctl set_param ldlm.namespaces.*osc*.lru_size=$((<replaceable>NR_CPU</replaceable>*100))</screen>
2096 <para>Replace <literal><replaceable>NR_CPU</replaceable></literal> with the number of CPUs on the client node.</para>
2098 <para>To determine the number of locks being granted, run:</para>
2099 <screen>$ lctl get_param ldlm.namespaces.*.pool.limit</screen>
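<para>To view the current <literal>lru_size</literal> value for every lock namespace on a
client (both MDC and OSC devices), run:</para>
<screen>$ lctl get_param ldlm.namespaces.*.lru_size</screen>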
2101 <section xml:id="dbdoclet.50438271_87260">
2103 <primary>proc</primary>
2104 <secondary>thread counts</secondary>
2105 </indexterm>Setting MDS and OSS Thread Counts</title>
2106 <para>The MDS and OSS thread count tunables can be used to set the minimum and maximum thread counts
2107 or to get the current number of running threads for the services listed in the table below.</para>
2109 <informaltable frame="all">
2111 <colspec colname="c1" colwidth="50*"/>
2112 <colspec colname="c2" colwidth="50*"/>
2117 <emphasis role="bold">Service</emphasis></para>
2121 <emphasis role="bold">Description</emphasis></para>
2126 <literal> mds.MDS.mdt </literal>
2129 <para>Main metadata operations service</para>
2134 <literal> mds.MDS.mdt_readpage </literal>
2137 <para>Metadata <literal>readdir</literal> service</para>
2142 <literal> mds.MDS.mdt_setattr </literal>
2145 <para>Metadata <literal>setattr/close</literal> operations service </para>
2150 <literal> ost.OSS.ost </literal>
2153 <para>Main data operations service</para>
2158 <literal> ost.OSS.ost_io </literal>
2161 <para>Bulk data I/O services</para>
2166 <literal> ost.OSS.ost_create </literal>
2169 <para>OST object pre-creation service</para>
2174 <literal> ldlm.services.ldlm_canceld </literal>
2177 <para>DLM lock cancel service</para>
2182 <literal> ldlm.services.ldlm_cbd </literal>
2185 <para>DLM lock grant service</para>
2191 <para>For each service, an entry as shown below is
2192 created:<screen>/proc/fs/lustre/<replaceable>service</replaceable>/*/threads_<replaceable>min|max|started</replaceable></screen></para>
2195 <para>To temporarily set this tunable, run:</para>
2196 <screen># lctl <replaceable>get|set</replaceable>_param <replaceable>service</replaceable>.threads_<replaceable>min|max|started</replaceable> </screen>
2199 <para>To permanently set this tunable, run:</para>
2200 <screen># lctl conf_param <replaceable>obdname|fsname.obdtype</replaceable>.threads_<replaceable>min|max|started</replaceable> </screen>
2201 <para condition='l25'>For version 2.5 or later, run:
2202 <screen># lctl set_param -P <replaceable>service</replaceable>.threads_<replaceable>min|max|started</replaceable></screen></para>
2205 <para>The following examples show how to set thread counts and get the number of running threads
2206 for the service <literal>ost_io</literal> using the tunable
2207 <literal><replaceable>service</replaceable>.threads_<replaceable>min|max|started</replaceable></literal>.</para>
2210 <para>To get the number of running threads, run:</para>
2211 <screen># lctl get_param ost.OSS.ost_io.threads_started
2212 ost.OSS.ost_io.threads_started=128</screen>
2215 <para>To get the maximum number of threads (512 in this example), run:
2216 <screen># lctl get_param ost.OSS.ost_io.threads_max
2217 ost.OSS.ost_io.threads_max=512</screen>
2220 <para>To set the maximum thread count to 256 instead of 512 (to avoid overloading the
2221 storage array with requests), run:
2222 <screen># lctl set_param ost.OSS.ost_io.threads_max=256
2223 ost.OSS.ost_io.threads_max=256</screen>
2226 <para>To set the maximum thread count to 256 instead of 512 permanently, run:</para>
2227 <screen># lctl conf_param testfs.ost.ost_io.threads_max=256</screen>
2228 <para condition='l25'>For version 2.5 or later, run:
2229 <screen># lctl set_param -P ost.OSS.ost_io.threads_max=256
2230 ost.OSS.ost_io.threads_max=256 </screen> </para>
2233 <para> To check if the <literal>threads_max</literal> setting is active, run:</para>
2234 <screen># lctl get_param ost.OSS.ost_io.threads_max
2235 ost.OSS.ost_io.threads_max=256</screen>
2239 <para>If the number of service threads is changed while the file system is running, the change
2240 may not take effect until the file system is stopped and restarted. If the number of service
2241 threads in use exceeds the new <literal>threads_max</literal> value setting, service threads
2242 that are already running will not be stopped.</para>
2244 <para>See also <xref xmlns:xlink="http://www.w3.org/1999/xlink" linkend="lustretuning"/></para>
2246 <section xml:id="dbdoclet.50438271_83523">
2248 <primary>proc</primary>
2249 <secondary>debug</secondary>
2250 </indexterm>Enabling and Interpreting Debugging Logs</title>
2251 <para>By default, a detailed log of all operations is generated to aid in debugging. Flags that
2252 control debugging are found in <literal>/proc/sys/lnet/debug</literal>. </para>
2253 <para>The overhead of debugging can affect the performance of a Lustre file system. Therefore, to
2254 minimize the impact on performance, the debug level can be lowered, which reduces the amount
2255 of debugging information kept in the internal log buffer but does not alter the amount of
2256 information that goes to syslog. You can raise the debug level when you need to collect logs
2257 to debug problems. </para>
2258 <para>The debugging mask can be set using "symbolic names". The symbolic format is
2259 shown in the examples below.<itemizedlist>
2261 <para>To verify the debug level used, examine the <literal>sysctl</literal> that controls
2262 debugging by running:</para>
2263 <screen># sysctl lnet.debug
2264 lnet.debug = ioctl neterror warning error emerg ha config console</screen>
2267 <para>To turn off debugging (except for network error debugging), run the following
2268 command on all nodes concerned:</para>
2269 <screen># sysctl -w lnet.debug="neterror"
2270 lnet.debug = neterror</screen>
2272 </itemizedlist><itemizedlist>
2274 <para>To turn off debugging completely, run the following command on all nodes concerned:
2276 <screen># sysctl -w lnet.debug=0
2277 lnet.debug = 0</screen>
2280 <para>To set an appropriate debug level for a production environment, run:</para>
2281 <screen># sysctl -w lnet.debug="warning dlmtrace error emerg ha rpctrace vfstrace"
2282 lnet.debug = warning dlmtrace error emerg ha rpctrace vfstrace</screen>
2283 <para>The flags shown in this example collect enough high-level information to aid
2284 debugging, but they do not cause any serious performance impact.</para>
2286 </itemizedlist><itemizedlist>
2288 <para>To clear all flags and set new flags, run:</para>
2289 <screen># sysctl -w lnet.debug="warning"
2290 lnet.debug = warning</screen>
2292 </itemizedlist><itemizedlist>
2294 <para>To add new flags to flags that have already been set, precede each one with a
2295 "<literal>+</literal>":</para>
2296 <screen># sysctl -w lnet.debug="+neterror +ha"
2297 lnet.debug = +neterror +ha
2299 lnet.debug = neterror warning ha</screen>
2302 <para>To remove individual flags, precede them with a
2303 "<literal>-</literal>":</para>
2304 <screen># sysctl -w lnet.debug="-ha"
2307 lnet.debug = neterror warning</screen>
2310 <para>To verify or change the debug level, run commands such as the following:</para>
2311 <screen># lctl get_param debug
2314 # lctl set_param debug=+ha
2315 # lctl get_param debug
2318 # lctl set_param debug=-warning
2319 # lctl get_param debug
2321 neterror ha</screen>
2323 </itemizedlist></para>
2324 <para>Debugging parameters include:</para>
2327 <para><literal>subsystem_debug</literal> - Controls the debug logs for subsystems.</para>
2330 <para><literal>debug_path</literal> - Indicates the location where the debug log is dumped
2331 when triggered automatically or manually. The default path is
2332 <literal>/tmp/lustre-log</literal>.</para>
2335 <para>These parameters can also be set directly through <literal>sysctl</literal>, for example:<screen>sysctl -w lnet.debug={value}</screen></para>
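<para>For example, to direct automatically triggered debug dumps to a different location, the
<literal>debug_path</literal> parameter can be set the same way. This is a sketch; it assumes
the <literal>lnet.debug_path</literal> sysctl is available on your kernel:</para>
<screen># sysctl -w lnet.debug_path=/var/tmp/lustre-log
lnet.debug_path = /var/tmp/lustre-log</screen>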
2336 <para>Additional useful parameters: <itemizedlist>
2338 <para><literal>panic_on_lbug</literal> - Causes ''panic'' to be called
2339 when the Lustre software detects an internal problem (an <literal>LBUG</literal> log
2340 entry); panic crashes the node. This is particularly useful when a kernel crash dump
2341 utility is configured. The crash dump is triggered when the internal inconsistency is
2342 detected by the Lustre software. </para>
2345 <para><literal>upcall</literal> - Allows you to specify the path to the binary which will
2346 be invoked when an <literal>LBUG</literal> log entry is encountered. This binary is
2347 called with four parameters:</para>
2348 <para> - The string ''<literal>LBUG</literal>''.</para>
2349 <para> - The file where the <literal>LBUG</literal> occurred.</para>
2350 <para> - The function name.</para>
2351 <para> - The line number in the file</para>
2353 </itemizedlist></para>
2355 <title>Interpreting OST Statistics</title>
2357 <para>See also <xref linkend="dbdoclet.50438219_84890"/> (<literal>llobdstat</literal>) and
2358 <xref linkend="dbdoclet.50438273_80593"/> (<literal>collectl</literal>).</para>
2360 <para>OST <literal>stats</literal> files can be used to provide statistics showing activity
2361 for each OST. For example:</para>
2362 <screen># lctl get_param osc.testfs-OST0000-osc.stats
2363 snapshot_time 1189732762.835363
2368 obd_ping 212</screen>
2369 <para>Use the <literal>llstat</literal> utility to monitor statistics over time.</para>
2370 <para>To clear the statistics, use the <literal>-c</literal> option to
2371 <literal>llstat</literal>. To specify how frequently the statistics should be reported (in
2372 seconds), use the <literal>-i</literal> option. In the example below, the
2373 <literal>-c</literal> option clears the statistics and <literal>-i10</literal> option
2374 reports statistics every 10 seconds:</para>
2375 <screen role="smaller">$ llstat -c -i10 /proc/fs/lustre/ost/OSS/ost_io/stats
2377 /usr/bin/llstat: STATS on 06/06/07
2378 /proc/fs/lustre/ost/OSS/ost_io/ stats on 192.168.16.35@tcp
2379 snapshot_time 1181074093.276072
2381 /proc/fs/lustre/ost/OSS/ost_io/stats @ 1181074103.284895
2383 Count Rate Events Unit last min avg max stddev
2384 req_waittime 8 0 8 [usec] 2078 34 259.75 868 317.49
2385 req_qdepth 8 0 8 [reqs] 1 0 0.12 1 0.35
2386 req_active 8 0 8 [reqs] 11 1 1.38 2 0.52
2387 reqbuf_avail 8 0 8 [bufs] 511 63 63.88 64 0.35
2388 ost_write 8 0 8 [bytes] 169767 72914 212209.62 387579 91874.29
2390 /proc/fs/lustre/ost/OSS/ost_io/stats @ 1181074113.290180
2392 Count Rate Events Unit last min avg max stddev
2393 req_waittime 31 3 39 [usec] 30011 34 822.79 12245 2047.71
2394 req_qdepth 31 3 39 [reqs] 0 0 0.03 1 0.16
2395 req_active 31 3 39 [reqs] 58 1 1.77 3 0.74
2396 reqbuf_avail 31 3 39 [bufs] 1977 63 63.79 64 0.41
2397 ost_write 30 3 38 [bytes] 1028467 15019 315325.16 910694 197776.51
2399 /proc/fs/lustre/ost/OSS/ost_io/stats @ 1181074123.325560
2401 Count Rate Events Unit last min avg max stddev
2402 req_waittime 21 2 60 [usec] 14970 34 784.32 12245 1878.66
2403 req_qdepth 21 2 60 [reqs] 0 0 0.02 1 0.13
2404 req_active 21 2 60 [reqs] 33 1 1.70 3 0.70
2405 reqbuf_avail 21 2 60 [bufs] 1341 63 63.82 64 0.39
2406 ost_write 21 2 59 [bytes] 7648424 15019 332725.08 910694 180397.87
2408 <para>The columns in this example are described in the table below.</para>
2409 <informaltable frame="all">
2411 <colspec colname="c1" colwidth="50*"/>
2412 <colspec colname="c2" colwidth="50*"/>
2416 <para><emphasis role="bold">Parameter</emphasis></para>
2419 <para><emphasis role="bold">Description</emphasis></para>
2425 <entry><literal>Name</literal></entry>
2426 <entry>Name of the service event. See the tables below for descriptions of service
2427 events that are tracked.</entry>
2432 <literal>Cur. Count </literal></para>
2435 <para>Number of events of each type sent in the last interval.</para>
2441 <literal>Cur. Rate </literal></para>
2444 <para>Number of events per second in the last interval.</para>
2450 <literal> # Events </literal></para>
2453 <para>Total number of such events since the events have been cleared.</para>
2459 <literal> Unit </literal></para>
2462 <para>Unit of measurement for that statistic (for example, microseconds, requests, or bytes).</para>
2469 <literal> last </literal></para>
2472 <para>Average rate of these events (in units/event) for the last interval during
2473 which they arrived. For instance, in the above mentioned case of
2474 <literal>ost_destroy</literal> it took an average of 736 microseconds per
2475 destroy for the 400 object destroys in the previous 10 seconds.</para>
2481 <literal> min </literal></para>
2484 <para>Minimum rate (in units/events) since the service started.</para>
2490 <literal> avg </literal></para>
2493 <para>Average rate.</para>
2499 <literal> max </literal></para>
2502 <para>Maximum rate.</para>
2508 <literal> stddev </literal></para>
2511 <para>Standard deviation (not measured in some cases)</para>
2517 <para>Events common to all services are shown in the table below.</para>
2518 <informaltable frame="all">
2520 <colspec colname="c1" colwidth="50*"/>
2521 <colspec colname="c2" colwidth="50*"/>
2525 <para><emphasis role="bold">Parameter</emphasis></para>
2528 <para><emphasis role="bold">Description</emphasis></para>
2536 <literal> req_waittime </literal></para>
2539 <para>Amount of time a request waited in the queue before being handled by an
2540 available server thread.</para>
2546 <literal> req_qdepth </literal></para>
2549 <para>Number of requests waiting to be handled in the queue for this service.</para>
2555 <literal> req_active </literal></para>
2558 <para>Number of requests currently being handled.</para>
2564 <literal> reqbuf_avail </literal></para>
2567 <para>Number of unsolicited LNET request buffers for this service.</para>
2573 <para>Some service-specific events of interest are described in the table below.</para>
2574 <informaltable frame="all">
2576 <colspec colname="c1" colwidth="50*"/>
2577 <colspec colname="c2" colwidth="50*"/>
2581 <para><emphasis role="bold">Parameter</emphasis></para>
2584 <para><emphasis role="bold">Description</emphasis></para>
2592 <literal> ldlm_enqueue </literal></para>
2595 <para>Time it takes to enqueue a lock (this includes file open on the MDS)</para>
2601 <literal> mds_reint </literal></para>
2604 <para>Time it takes to process an MDS modification record (includes
2605 <literal>create</literal>, <literal>mkdir</literal>, <literal>unlink</literal>,
2606 <literal>rename</literal> and <literal>setattr</literal>)</para>
2614 <title>Interpreting MDT Statistics</title>
2616 <para>See also <xref linkend="dbdoclet.50438219_84890"/> (<literal>llobdstat</literal>) and
2617 <xref linkend="dbdoclet.50438273_80593"/> (<literal>collectl</literal>).</para>
2619 <para>MDT <literal>stats</literal> files can be used to track MDT statistics for the MDS. The
2620 example below shows sample output from an MDT <literal>stats</literal> file.</para>
2621 <screen># lctl get_param mds.*-MDT0000.stats
2622 snapshot_time 1244832003.676892 secs.usecs
2623 open 2 samples [reqs]
2624 close 1 samples [reqs]
2625 getxattr 3 samples [reqs]
2626 process_config 1 samples [reqs]
2627 connect 2 samples [reqs]
2628 disconnect 2 samples [reqs]
2629 statfs 3 samples [reqs]
2630 setattr 1 samples [reqs]
2631 getattr 3 samples [reqs]
2632 llog_init 6 samples [reqs]
2633 notify 16 samples [reqs]</screen>
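<para>As with OST statistics, the <literal>llstat</literal> utility can sample these counters
at an interval. The following is a sketch; substitute the actual path of the MDT
<literal>stats</literal> file on your MDS (it can be located with
<literal>lctl list_param mds.*.stats</literal>):</para>
<screen># llstat -i 10 /proc/fs/lustre/mds/testfs-MDT0000/stats</screen>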