1 <?xml version='1.0' encoding='utf-8'?>
2 <chapter xmlns="http://docbook.org/ns/docbook"
3 xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en-US"
5 <title xml:id="lustretuning.title">Tuning a Lustre File System</title>
6 <para>This chapter contains information about tuning a Lustre file system for
7 better performance.</para>
9 <para>Many options in the Lustre software are set by means of kernel module
10 parameters. These parameters are contained in the
11 <literal>/etc/modprobe.d/lustre.conf</literal> file.</para>
13 <section xml:id="dbdoclet.50438272_55226">
16 <primary>tuning</primary>
19 <primary>tuning</primary>
20 <secondary>service threads</secondary>
21 </indexterm>Optimizing the Number of Service Threads</title>
22 <para>An OSS can have a minimum of two service threads and a maximum of 512
23 service threads. The number of service threads is a function of how much
24 RAM and how many CPUs are on each OSS node (1 thread / 128MB * num_cpus).
25 If the load on the OSS node is high, new service threads will be started in
26 order to process more requests concurrently, up to 4x the initial number of
27 threads (subject to the maximum of 512). For a 2GB 2-CPU system, the
28 default thread count is 32 and the maximum thread count is 128.</para>
29 <para>Increasing the size of the thread pool may help when:</para>
32 <para>Several OSTs are exported from a single OSS</para>
35 <para>Back-end storage is running synchronously</para>
38 <para>I/O completions take excessive time due to slow storage</para>
41 <para>Decreasing the size of the thread pool may help if:</para>
44 <para>Clients are overwhelming the storage capacity</para>
47 <para>There are lots of "slow I/O" or similar messages</para>
50 <para>Increasing the number of I/O threads allows the kernel and storage to
51 aggregate many writes together for more efficient disk I/O. The OSS thread
52 pool is shared--each thread allocates approximately 1.5 MB (maximum RPC
53 size + 0.5 MB) for internal I/O buffers.</para>
54 <para>It is very important to consider memory consumption when increasing
55 the thread pool size. Drives are only able to sustain a certain amount of
56 parallel I/O activity before performance is degraded, due to the high
57 number of seeks and the OST threads just waiting for I/O. In this
58 situation, it may be advisable to decrease the load by decreasing the
59 number of OST threads.</para>
60 <para>Determining the optimum number of OSS threads is a process of trial
61 and error, and varies for each particular configuration. Variables include
62 the number of OSTs on each OSS, number and speed of disks, RAID
63 configuration, and available RAM. You may want to start with a number of
64 OST threads equal to the number of actual disk spindles on the node. If you
65 use RAID, subtract any dead spindles not used for actual data (e.g., 1 of N
66 spindles for RAID5, 2 of N spindles for RAID6), and monitor the
67 performance of clients during usual workloads. If performance is degraded,
68 increase the thread count and test again, repeating until performance is
69 degraded again or reaches a satisfactory level.</para>
71 <para>If there are too many threads, the latency for individual I/O
72 requests can become very high; this situation should be avoided. Set the desired
73 maximum thread count permanently using the method described above.</para>
78 <primary>tuning</primary>
79 <secondary>OSS threads</secondary>
80 </indexterm>Specifying the OSS Service Thread Count</title>
82 <literal>oss_num_threads</literal> parameter enables the number of OST
83 service threads to be specified at module load time on the OSS
86 options ost oss_num_threads={N}
88 <para>After startup, the minimum and maximum number of OSS thread counts
90 <literal>{service}.threads_{min,max,started}</literal> tunable. To change
91 the tunable at runtime, run:</para>
94 lctl {get,set}_param {service}.threads_{min,max,started}
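<para>For example, a hedged illustration of raising the maximum thread count
for the <literal>ost_io</literal> service (the service name and value shown
are assumptions for illustration only):</para>
<screen>$ lctl set_param ost.OSS.ost_io.threads_max=256
ost.OSS.ost_io.threads_max=256</screen>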
98 This works in a similar fashion to
99 the binding of threads on the MDS. MDS thread tuning is covered in
100 <xref linkend="dbdoclet.mdsbinding" />.</para>
104 <literal>oss_cpts=[EXPRESSION]</literal> binds the default OSS service
106 <literal>[EXPRESSION]</literal>.</para>
110 <literal>oss_io_cpts=[EXPRESSION]</literal> binds the IO OSS service
112 <literal>[EXPRESSION]</literal>.</para>
115 <para>For further details, see
116 <xref linkend="dbdoclet.50438271_87260" />.</para>
118 <section xml:id="dbdoclet.mdstuning">
121 <primary>tuning</primary>
122 <secondary>MDS threads</secondary>
123 </indexterm>Specifying the MDS Service Thread Count</title>
125 <literal>mds_num_threads</literal> parameter enables the number of MDS
126 service threads to be specified at module load time on the MDS
128 <screen>options mds mds_num_threads={N}</screen>
129 <para>After startup, the minimum and maximum number of MDS thread counts
131 <literal>{service}.threads_{min,max,started}</literal> tunable. To change
132 the tunable at runtime, run:</para>
135 lctl {get,set}_param {service}.threads_{min,max,started}
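<para>For example, a sketch of capping the thread count for the primary MDS
service (the service name and value shown are illustrative assumptions):</para>
<screen>$ lctl set_param mds.MDS.mdt.threads_max=128
mds.MDS.mdt.threads_max=128</screen>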
138 <para>For details, see
139 <xref linkend="dbdoclet.50438271_87260" />.</para>
140 <para>The number of MDS service threads started depends on system size
141 and the load on the server, and has a default maximum of 64. The
142 maximum potential number of threads (<literal>MDS_MAX_THREADS</literal>)
145 <para>The OSS and MDS start two threads per service per CPT at mount
146 time, and dynamically increase the number of running service threads in
147 response to server load. Setting the <literal>*_num_threads</literal>
148 module parameter starts the specified number of threads for that
149 service immediately and disables automatic thread creation behavior.
152 <para>Parameters are available to provide administrators control
153 over the number of service threads.</para>
157 <literal>mds_rdpg_num_threads</literal> controls the number of threads
158 providing the read page service. The read page service handles
159 file close and readdir operations.</para>
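<para>As an illustration (following the <literal>mds_num_threads</literal>
example above; the module name and value are assumptions that may vary by
release), the read page thread count could be set at module load time
with:</para>
<screen>options mds mds_rdpg_num_threads=64</screen>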
164 <section xml:id="dbdoclet.mdsbinding">
167 <primary>tuning</primary>
168 <secondary>MDS binding</secondary>
169 </indexterm>Binding MDS Service Thread to CPU Partitions</title>
170 <para>With the Node Affinity (<xref linkend="nodeaffdef" />) feature,
171 MDS threads can be bound to particular CPU partitions (CPTs) to improve CPU
172 cache usage and memory locality. Default values for CPT counts and CPU core
173 bindings are selected automatically to provide good overall performance for
174 a given CPU count. However, an administrator can deviate from these settings
175 if they choose. For details on specifying the mapping of CPU cores to
176 CPTs see <xref linkend="dbdoclet.libcfstuning"/>.
181 <literal>mds_num_cpts=[EXPRESSION]</literal> binds the default MDS
182 service threads to CPTs defined by
183 <literal>EXPRESSION</literal>. For example
184 <literal>mds_num_cpts=[0-3]</literal> will bind the MDS service threads
186 <literal>CPT[0,1,2,3]</literal>.</para>
190 <literal>mds_rdpg_num_cpts=[EXPRESSION]</literal> binds the read page
191 service threads to CPTs defined by
192 <literal>EXPRESSION</literal>. The read page service handles file close
193 and readdir requests. For example
194 <literal>mds_rdpg_num_cpts=[4]</literal> will bind the read page threads
196 <literal>CPT4</literal>.</para>
199 <para>Parameters must be set before module load in the file
200 <literal>/etc/modprobe.d/lustre.conf</literal>. For example:
201 <example><title>lustre.conf</title>
202 <screen>options lnet networks=tcp0(eth0)
203 options mdt mds_num_cpts=[0]</screen>
207 <section xml:id="dbdoclet.50438272_73839">
210 <primary>LNet</primary>
211 <secondary>tuning</secondary>
214 <primary>tuning</primary>
215 <secondary>LNet</secondary>
216 </indexterm>Tuning LNet Parameters</title>
217 <para>This section describes LNet tunables, the use of which may be
218 necessary on some systems to improve performance. To test the performance
219 of your Lustre network, see
220 <xref linkend='lnetselftest' />.</para>
222 <title>Transmit and Receive Buffer Size</title>
223 <para>The kernel allocates buffers for sending and receiving messages on
226 <literal>ksocklnd</literal> has separate parameters for the transmit and
227 receive buffers.</para>
229 options ksocklnd tx_buffer_size=0 rx_buffer_size=0
231 <para>If these parameters are left at the default value (0), the system
232 automatically tunes the transmit and receive buffer size. In almost every
233 case, this default produces the best performance. Do not attempt to tune
234 these parameters unless you are a network expert.</para>
237 <title>Hardware Interrupts (
238 <literal>enable_irq_affinity</literal>)</title>
239 <para>The hardware interrupts that are generated by network adapters may
240 be handled by any CPU in the system. In some cases, we would like network
241 traffic to remain local to a single CPU to help keep the processor cache
242 warm and minimize the impact of context switches. This is helpful when an
243 SMP system has more than one network interface and ideal when the number
244 of interfaces equals the number of CPUs. To enable the
245 <literal>enable_irq_affinity</literal> parameter, enter:</para>
247 options ksocklnd enable_irq_affinity=1
249 <para>In other cases, if you have an SMP platform with a single fast
250 interface such as 10 Gb Ethernet and more than two CPUs, you may see
251 performance improve by turning this parameter off.</para>
253 options ksocklnd enable_irq_affinity=0
255 <para>By default, this parameter is off. As always, you should test the
256 performance to compare the impact of changing this parameter.</para>
261 <primary>tuning</primary>
262 <secondary>Network interface binding</secondary>
263 </indexterm>Binding Network Interface Against CPU Partitions</title>
264 <para>Lustre allows enhanced network interface control. This means that
265 an administrator can bind an interface to one or more CPU partitions.
266 Bindings are specified as options to the LNet modules. For more
267 information on specifying module options, see
268 <xref linkend="dbdoclet.50438293_15350" /></para>
270 <literal>o2ib0(ib0)[0,1]</literal> will ensure that all messages for
271 <literal>o2ib0</literal> will be handled by LND threads executing on
272 <literal>CPT0</literal> and
273 <literal>CPT1</literal>. An additional example might be:
274 <literal>tcp1(eth0)[0]</literal>. Messages for
275 <literal>tcp1</literal> are handled by threads on
276 <literal>CPT0</literal>.</para>
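<para>As a sketch, such bindings are typically expressed through the LNet
<literal>networks</literal> module option in
<literal>/etc/modprobe.d/lustre.conf</literal> (the interface names here are
assumptions):</para>
<screen>options lnet networks="o2ib0(ib0)[0,1],tcp1(eth0)[0]"</screen>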
281 <primary>tuning</primary>
282 <secondary>Network interface credits</secondary>
283 </indexterm>Network Interface Credits</title>
284 <para>Network interface (NI) credits are shared across all CPU partitions
285 (CPT). For example, if a machine has four CPTs and the number of NI
286 credits is 512, then each partition has 128 credits. If a large number of
287 CPTs exist on the system, LNet checks and validates the NI credits for
288 each CPT to ensure each CPT has a workable number of credits. For
289 example, if a machine has 16 CPTs and the number of NI credits is 256,
290 then each partition only has 16 credits. 16 NI credits is low and could
291 negatively impact performance. As a result, LNet automatically adjusts
293 <literal>peer_credits</literal>(
294 <literal>peer_credits</literal> is 8 by default), so each partition has 64
296 <para>Increasing the number of
297 <literal>credits</literal>/
298 <literal>peer_credits</literal> can improve the performance of high
299 latency networks (at the cost of consuming more memory) by enabling LNet
300 to send more inflight messages to a specific network/peer and keep the
301 pipeline saturated.</para>
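<para>A hedged illustration of raising per-peer concurrency for the o2ib LND
via module options (the values are examples, not recommendations):</para>
<screen>options ko2iblnd credits=1024 peer_credits=32</screen>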
302 <para>An administrator can modify the NI credit count using
303 <literal>ksocklnd</literal> or
304 <literal>ko2iblnd</literal>. In the example below, 256 credits are
305 applied to TCP connections.</para>
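<para>A minimal sketch, assuming the socket LND module options are set in
<literal>/etc/modprobe.d/lustre.conf</literal>:</para>
<screen>options ksocklnd credits=256</screen>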
309 <para>Applying 256 credits to IB connections can be achieved with:</para>
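<para>For example, a sketch using the o2ib LND module option:</para>
<screen>options ko2iblnd credits=256</screen>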
314 <para>LNet may revalidate the NI credits, so the administrator's
315 request may not persist.</para>
321 <primary>tuning</primary>
322 <secondary>router buffers</secondary>
323 </indexterm>Router Buffers</title>
324 <para>When a node is set up as an LNet router, three pools of buffers are
325 allocated: tiny, small and large. These pools are allocated per CPU
326 partition and are used to buffer messages that arrive at the router to be
327 forwarded to the next hop. The three different buffer sizes accommodate
328 different size messages.</para>
329 <para>If a message arrives that can fit in a tiny buffer, then a tiny
330 buffer is used. If a message does not fit in a tiny buffer but fits in a
331 small buffer, then a small buffer is used. Finally, if a message does not
332 fit in either a tiny buffer or a small buffer, a large buffer is
334 <para>Router buffers are shared by all CPU partitions. For a machine with
335 a large number of CPTs, the router buffer number may need to be specified
336 manually for best performance. A low number of router buffers risks
337 starving the CPU partitions of resources.</para>
341 <literal>tiny_router_buffers</literal>: Zero payload buffers used for
342 signals and acknowledgements.</para>
346 <literal>small_router_buffers</literal>: 4 KB payload buffers for
347 small messages</para>
351 <literal>large_router_buffers</literal>: 1 MB maximum payload
352 buffers, corresponding to the recommended RPC size of 1 MB.</para>
355 <para>The default setting for router buffers typically results in
356 acceptable performance. LNet automatically sets a default value to reduce
357 the likelihood of resource starvation. The size of a router buffer can be
358 modified as shown in the example below. In this example, the size of the
359 large buffer is modified using the
360 <literal>large_router_buffers</literal> parameter.</para>
362 lnet large_router_buffers=8192
365 <para>LNet may revalidate the router buffer setting, so the
366 administrator's request may not persist.</para>
372 <primary>tuning</primary>
373 <secondary>portal round-robin</secondary>
374 </indexterm>Portal Round-Robin</title>
375 <para>Portal round-robin defines the policy LNet applies to deliver
376 events and messages to the upper layers. The upper layers are PTLRPC
377 service or LNet selftest.</para>
378 <para>If portal round-robin is disabled, LNet will deliver messages to
379 CPTs based on a hash of the source NID. Hence, all messages from a
380 specific peer will be handled by the same CPT. This can reduce data
381 traffic between CPUs. However, for some workloads, this behavior may
382 result in poorly balanced load across CPUs.</para>
383 <para>If portal round-robin is enabled, LNet will round-robin incoming
384 events across all CPTs. This may balance load better across the CPU but
385 can incur a cross CPU overhead.</para>
386 <para>The current policy can be changed by an administrator with
388 <replaceable>value</replaceable>>
389 /proc/sys/lnet/portal_rotor</literal>. There are four options for
391 <replaceable>value</replaceable>
396 <literal>OFF</literal>
398 <para>Disable portal round-robin on all incoming requests.</para>
402 <literal>ON</literal>
404 <para>Enable portal round-robin on all incoming requests.</para>
408 <literal>RR_RT</literal>
410 <para>Enable portal round-robin only for routed messages.</para>
414 <literal>HASH_RT</literal>
416 <para>Routed messages will be delivered to the upper layer by hash of
417 the source NID (instead of the NID of the router). This is the default
423 <title>LNet Peer Health</title>
424 <para>Two options are available to help determine peer health:
428 <literal>peer_timeout</literal>- The timeout (in seconds) before an
429 aliveness query is sent to a peer. For example, if
430 <literal>peer_timeout</literal> is set to
431 <literal>180sec</literal>, an aliveness query is sent to the peer
432 every 180 seconds. This feature only takes effect if the node is
433 configured as an LNet router.</para>
434 <para>In a routed environment, the
435 <literal>peer_timeout</literal> feature should always be on (set to a
436 value in seconds) on routers. If the router checker has been enabled,
437 the feature should be turned off by setting it to 0 on clients and
439 <para>For a non-routed scenario, enabling the
440 <literal>peer_timeout</literal> option provides health information
441 such as whether a peer is alive or not. For example, a client is able
442 to determine if an MGS or OST is up when it sends it a message. If a
443 response is received, the peer is alive; otherwise a timeout occurs
444 when the request is made.</para>
446 <literal>peer_timeout</literal> should be set to no less than the LND
447 timeout setting. For more information about LND timeouts, see
448 <xref xmlns:xlink="http://www.w3.org/1999/xlink"
449 linkend="section_c24_nt5_dl" />.</para>
451 <literal>o2iblnd</literal>(IB) driver is used,
452 <literal>peer_timeout</literal> should be at least twice the value of
454 <literal>ko2iblnd</literal> keepalive option. For more information
455 about keepalive options, see
456 <xref xmlns:xlink="http://www.w3.org/1999/xlink"
457 linkend="section_ngq_qhy_zl" />.</para>
461 <literal>avoid_asym_router_failure</literal>– When set to 1, the
462 router checker running on the client or a server periodically pings
463 all the routers corresponding to the NIDs identified in the routes
464 parameter setting on the node to determine the status of each router
465 interface. The default setting is 1. (For more information about the
466 LNet routes parameter, see
467 <xref xmlns:xlink="http://www.w3.org/1999/xlink"
468 linkend="lnet_module_routes" />.)</para>
469 <para>A router is considered down if any of its NIDs are down. For
470 example, router X has three NIDs:
471 <literal>Xnid1</literal>,
472 <literal>Xnid2</literal>, and
473 <literal>Xnid3</literal>. A client is connected to the router via
474 <literal>Xnid1</literal>. The client has router checker enabled. The
475 router checker periodically sends a ping to the router via
476 <literal>Xnid1</literal>. The router responds to the ping with the
477 status of each of its NIDs. In this case, it responds with
478 <literal>Xnid1=up</literal>,
479 <literal>Xnid2=up</literal>,
480 <literal>Xnid3=down</literal>. If
481 <literal>avoid_asym_router_failure==1</literal>, the router is
482 considered down if any of its NIDs are down, so router X is
483 considered down and will not be used for routing messages. If
484 <literal>avoid_asym_router_failure==0</literal>, router X will
485 continue to be used for routing messages.</para>
487 </itemizedlist></para>
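<para>As an illustration only (the parameter placement is an assumption and
should be verified for your release), a router using the o2ib LND might
set:</para>
<screen>options lnet avoid_asym_router_failure=1
options ko2iblnd peer_timeout=180</screen>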
488 <para>The following router checker parameters must be set to the maximum
489 value of the corresponding setting for this option on any client or
494 <literal>dead_router_check_interval</literal>
499 <literal>live_router_check_interval</literal>
504 <literal>router_ping_timeout</literal>
507 </itemizedlist></para>
508 <para>For example, the
509 <literal>dead_router_check_interval</literal> parameter on any router must
513 <section xml:id="dbdoclet.libcfstuning">
516 <primary>tuning</primary>
517 <secondary>libcfs</secondary>
518 </indexterm>libcfs Tuning</title>
519 <para>Lustre allows binding service threads via CPU Partition Tables
520 (CPTs). This allows the system administrator to fine-tune on which CPU
521 cores the Lustre service threads are run, for both OSS and MDS services,
522 as well as on the client.
524 <para>CPTs are useful to reserve some cores on the OSS or MDS nodes for
525 system functions such as system monitoring, HA heartbeat, or similar
526 tasks. On the client it may be useful to restrict Lustre RPC service
527 threads to a small subset of cores so that they do not interfere with
528 computation, or because these cores are directly attached to the network
531 <para>By default, the Lustre software will automatically generate CPU
532 partitions (CPT) based on the number of CPUs in the system.
533 The CPT count can be explicitly set on the libcfs module using
534 <literal>cpu_npartitions=<replaceable>NUMBER</replaceable></literal>.
535 The value of <literal>cpu_npartitions</literal> must be an integer between
536 1 and the number of online CPUs.
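<para>For example, a sketch that limits libcfs to four CPU partitions via a
module option (the value is illustrative only):</para>
<screen>options libcfs cpu_npartitions=4</screen>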
538 <para condition='l29'>In Lustre 2.9 and later the default is to use
539 one CPT per NUMA node. In earlier versions of Lustre, by default there
540 was a single CPT if the online CPU core count was four or fewer, and
541 additional CPTs would be created depending on the number of CPU cores,
542 typically with 4-8 cores per CPT.
545 <para>Setting <literal>cpu_npartitions=1</literal> will disable most
546 of the SMP Node Affinity functionality.</para>
549 <title>CPU Partition String Patterns</title>
550 <para>CPU partitions can be described using string pattern notation.
551 If <literal>cpu_pattern=N</literal> is used, then there will be one
552 CPT for each NUMA node in the system, with each CPT mapping all of
553 the CPU cores for that NUMA node.
555 <para>It is also possible to explicitly specify the mapping between
556 CPU cores and CPTs, for example:</para>
560 <literal>cpu_pattern="0[2,4,6] 1[3,5,7]"</literal>
562 <para>Create two CPTs, CPT0 contains cores 2, 4, and 6, while CPT1
563 contains cores 3, 5, and 7. CPU cores 0 and 1 will not be used by Lustre
564 service threads, and could be used for node services such as
565 system monitoring, HA heartbeat threads, etc. The binding of
566 non-Lustre services to those CPU cores may be done in userspace
567 using <literal>numactl(8)</literal> or other application-specific
568 methods, but is beyond the scope of this document.</para>
572 <literal>cpu_pattern="N 0[0-3] 1[4-7]"</literal>
574 <para>Create two CPTs, with CPT0 containing all CPUs in NUMA
575 nodes [0-3], while CPT1 contains all CPUs in NUMA nodes [4-7].</para>
578 <para>The current configuration of the CPU partition can be read via
579 <literal>lctl get_param cpu_partition_table</literal>. For example,
580 a simple 4-core system has a single CPT with all four CPU cores:
581 <screen>$ lctl get_param cpu_partition_table
582 cpu_partition_table=0 : 0 1 2 3</screen>
583 while a larger NUMA system with four 12-core CPUs may have four CPTs:
584 <screen>$ lctl get_param cpu_partition_table
586 0 : 0 1 2 3 4 5 6 7 8 9 10 11
587 1 : 12 13 14 15 16 17 18 19 20 21 22 23
588 2 : 24 25 26 27 28 29 30 31 32 33 34 35
589 3 : 36 37 38 39 40 41 42 43 44 45 46 47
594 <section xml:id="dbdoclet.lndtuning">
597 <primary>tuning</primary>
598 <secondary>LND tuning</secondary>
599 </indexterm>LND Tuning</title>
600 <para>LND tuning allows the number of threads per CPU partition to be
601 specified. An administrator can set the threads for both
602 <literal>ko2iblnd</literal> and
603 <literal>ksocklnd</literal> using the
604 <literal>nscheds</literal> parameter. This adjusts the number of threads for
605 each partition, not the overall number of threads on the LND.</para>
607 <para>The default number of threads for
608 <literal>ko2iblnd</literal> and
609 <literal>ksocklnd</literal> are automatically set and are chosen to
610 work well across a number of typical scenarios, for systems with both
611 high and low core counts.</para>
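<para>As an illustration, the per-CPT scheduler thread count for the o2ib LND
could be set with a module option such as the following (the value is an
example, not a recommendation):</para>
<screen>options ko2iblnd nscheds=4</screen>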
614 <title>ko2iblnd Tuning</title>
615 <para>The following table outlines the ko2iblnd module parameters to be used
617 <informaltable frame="all">
619 <colspec colname="c1" colwidth="50*" />
620 <colspec colname="c2" colwidth="50*" />
621 <colspec colname="c3" colwidth="50*" />
626 <emphasis role="bold">Module Parameter</emphasis>
631 <emphasis role="bold">Default Value</emphasis>
636 <emphasis role="bold">Description</emphasis>
645 <literal>service</literal>
650 <literal>987</literal>
654 <para>Service number (within RDMA_PS_TCP).</para>
660 <literal>cksum</literal>
669 <para>Set non-zero to enable message (not RDMA) checksums.</para>
675 <literal>timeout</literal>
680 <literal>50</literal>
684 <para>Timeout in seconds.</para>
690 <literal>nscheds</literal>
699 <para>Number of threads in each scheduler pool (per CPT). Value of
700 zero means we derive the number from the number of cores.</para>
706 <literal>conns_per_peer</literal>
711 <literal>4 (OmniPath), 1 (Everything else)</literal>
715 <para>Introduced in 2.10. Number of connections to each peer. Messages
716 are sent round-robin over the connection pool. Provides significant
717 improvement with OmniPath.</para>
723 <literal>ntx</literal>
728 <literal>512</literal>
732 <para>Number of message descriptors allocated for each pool at
733 startup. Grows at runtime. Shared by all CPTs.</para>
739 <literal>credits</literal>
744 <literal>256</literal>
748 <para>Number of concurrent sends on network.</para>
754 <literal>peer_credits</literal>
763 <para>Number of concurrent sends to 1 peer. Related/limited by IB
770 <literal>peer_credits_hiw</literal>
779 <para>Threshold at which to eagerly return credits.</para>
785 <literal>peer_buffer_credits</literal>
794 <para>Number of per-peer router buffer credits.</para>
800 <literal>peer_timeout</literal>
805 <literal>180</literal>
809 <para>Seconds without aliveness news to declare peer dead (less than
810 or equal to 0 to disable).</para>
816 <literal>ipif_name</literal>
821 <literal>ib0</literal>
825 <para>IPoIB interface name.</para>
831 <literal>retry_count</literal>
840 <para>Retransmissions when no ACK received.</para>
846 <literal>rnr_retry_count</literal>
855 <para>RNR retransmissions.</para>
861 <literal>keepalive</literal>
866 <literal>100</literal>
870 <para>Idle time in seconds before sending a keepalive.</para>
876 <literal>ib_mtu</literal>
885 <para>IB MTU 256/512/1024/2048/4096.</para>
891 <literal>concurrent_sends</literal>
900 <para>Send work-queue sizing. If zero, derived from
901 <literal>map_on_demand</literal> and <literal>peer_credits</literal>.
908 <literal>map_on_demand</literal>
913 <literal>0 (pre-4.8 Linux) 1 (4.8 Linux onward) 32 (OmniPath)</literal>
917 <para>Number of fragments reserved for connection. If zero, use
918 global memory region (found to be a security issue). If non-zero, use
919 FMR or FastReg for memory registration. Value needs to agree between
920 both peers of connection.</para>
926 <literal>fmr_pool_size</literal>
931 <literal>512</literal>
935 <para>Size of fmr pool on each CPT (>= ntx / 4). Grows at runtime.
942 <literal>fmr_flush_trigger</literal>
947 <literal>384</literal>
951 <para>Number dirty FMRs that triggers pool flush.</para>
957 <literal>fmr_cache</literal>
966 <para>Non-zero to enable FMR caching.</para>
972 <literal>dev_failover</literal>
981 <para>HCA failover for bonding (0 OFF, 1 ON, other values reserved).
988 <literal>require_privileged_port</literal>
997 <para>Require privileged port when accepting connection.</para>
1003 <literal>use_privileged_port</literal>
1008 <literal>1</literal>
1012 <para>Use privileged port when initiating connection.</para>
1018 <literal>wrq_sge</literal>
1023 <literal>2</literal>
1027 <para>Introduced in 2.10. Number of scatter/gather element groups per
1028 work request. Used to deal with fragmentation, which can consume
1029 double the number of work requests.</para>
1037 <section xml:id="dbdoclet.nrstuning">
1040 <primary>tuning</primary>
1041 <secondary>Network Request Scheduler (NRS) Tuning</secondary>
1042 </indexterm>Network Request Scheduler (NRS) Tuning</title>
1043 <para>The Network Request Scheduler (NRS) allows the administrator to
1044 influence the order in which RPCs are handled at servers, on a per-PTLRPC
1045 service basis, by providing different policies that can be activated and
1046 tuned in order to influence the RPC ordering. The aim of this is to provide
1047 for better performance, and possibly discrete performance characteristics
1048 using future policies.</para>
1049 <para>The NRS policy state of a PTLRPC service can be read and set via the
1050 <literal>{service}.nrs_policies</literal> tunable. To read a PTLRPC
1051 service's NRS policy state, run:</para>
1053 lctl get_param {service}.nrs_policies
1055 <para>For example, to read the NRS policy state of the
1056 <literal>ost_io</literal> service, run:</para>
1058 $ lctl get_param ost.OSS.ost_io.nrs_policies
1059 ost.OSS.ost_io.nrs_policies=
1098 high_priority_requests:
1136 <para>NRS policy state is shown in either one or two sections, depending on
1137 the PTLRPC service being queried. The first section is named
1138 <literal>regular_requests</literal> and is available for all PTLRPC
1139 services, optionally followed by a second section which is named
1140 <literal>high_priority_requests</literal>. This is because some PTLRPC
1141 services are able to treat some types of RPCs as higher priority ones, such
1142 that they are handled by the server with higher priority compared to other,
1143 regular RPC traffic. For PTLRPC services that do not support high-priority
1144 RPCs, you will only see the
1145 <literal>regular_requests</literal> section.</para>
1146 <para>There is a separate instance of each NRS policy on each PTLRPC
1147 service for handling regular and high-priority RPCs (if the service
1148 supports high-priority RPCs). For each policy instance, the following
1149 fields are shown:</para>
1150 <informaltable frame="all">
1152 <colspec colname="c1" colwidth="50*" />
1153 <colspec colname="c2" colwidth="50*" />
1158 <emphasis role="bold">Field</emphasis>
1163 <emphasis role="bold">Description</emphasis>
1172 <literal>name</literal>
1176 <para>The name of the policy.</para>
1182 <literal>state</literal>
1186 <para>The state of the policy; this can be any of
1187 <literal>invalid, stopping, stopped, starting, started</literal>.
1188 A fully enabled policy is in the
1189 <literal>started</literal> state.</para>
1195 <literal>fallback</literal>
1199 <para>Whether the policy is acting as a fallback policy or not. A
1200 fallback policy is used to handle RPCs that other enabled
1201 policies fail to handle, or do not support the handling of. The
1203 <literal>no, yes</literal>. Currently, only the FIFO policy can
1204 act as a fallback policy.</para>
1210 <literal>queued</literal>
1214 <para>The number of RPCs that the policy has waiting to be
1221 <literal>active</literal>
1225 <para>The number of RPCs that the policy is currently
1232 <para>To enable an NRS policy on a PTLRPC service run:</para>
1234 lctl set_param {service}.nrs_policies=
1235 <replaceable>policy_name</replaceable>
1237 <para>This will enable the policy
1238 <replaceable>policy_name</replaceable> for both regular and high-priority
1239 RPCs (if the PTLRPC service supports high-priority RPCs) on the given
1240 service. For example, to enable the CRR-N NRS policy for the ldlm_cbd
1241 service, run:</para>
1243 $ lctl set_param ldlm.services.ldlm_cbd.nrs_policies=crrn
1244 ldlm.services.ldlm_cbd.nrs_policies=crrn
1247 <para>For PTLRPC services that support high-priority RPCs, you can also
1249 <replaceable>reg|hp</replaceable> token, in order to enable an NRS policy
1250 for handling only regular or high-priority RPCs on a given PTLRPC service,
1253 lctl set_param {service}.nrs_policies="
1254 <replaceable>policy_name</replaceable>
1255 <replaceable>reg|hp</replaceable>"
1257 <para>For example, to enable the TRR policy for handling only regular, but
1258 not high-priority RPCs on the
1259 <literal>ost_io</literal> service, run:</para>
1261 $ lctl set_param ost.OSS.ost_io.nrs_policies="trr reg"
1262 ost.OSS.ost_io.nrs_policies="trr reg"
1266 <para>When enabling an NRS policy, the policy name must be given in
1267 lower-case characters, otherwise the operation will fail with an error
1273 <primary>tuning</primary>
1274 <secondary>Network Request Scheduler (NRS) Tuning</secondary>
1275 <tertiary>first in, first out (FIFO) policy</tertiary>
1276 </indexterm>First In, First Out (FIFO) policy</title>
1277 <para>The first in, first out (FIFO) policy handles RPCs in a service in
1278 the same order as they arrive from the LNet layer, so no special
1279 processing takes place to modify the RPC handling stream. FIFO is the
1280 default policy for all types of RPCs on all PTLRPC services, and is
1281 always enabled irrespective of the state of other policies, so that it
1282 can be used as a backup policy, in case a more elaborate policy that has
1283 been enabled fails to handle an RPC, or does not support handling a given
1285 <para>The FIFO policy has no tunables that adjust its behaviour.</para>
1290 <primary>tuning</primary>
1291 <secondary>Network Request Scheduler (NRS) Tuning</secondary>
1292 <tertiary>client round-robin over NIDs (CRR-N) policy</tertiary>
1293 </indexterm>Client Round-Robin over NIDs (CRR-N) policy</title>
1294 <para>The client round-robin over NIDs (CRR-N) policy performs batched
1295 round-robin scheduling of all types of RPCs, with each batch consisting
1296 of RPCs originating from the same client node, as identified by its NID.
1297 CRR-N aims to provide for better resource utilization across the cluster,
1298 and to help shorten completion times of jobs in some cases, by
1299 distributing available bandwidth more evenly across all clients.</para>
1300 <para>The CRR-N policy can be enabled on all types of PTLRPC services,
1301 and has the following tunable that can be used to adjust its
1306 <literal>{service}.nrs_crrn_quantum</literal>
1309 <literal>{service}.nrs_crrn_quantum</literal> tunable determines the
1310 maximum allowed size of each batch of RPCs; the unit of measure is in
1311 number of RPCs. To read the maximum allowed batch size of a CRR-N
1314 lctl get_param {service}.nrs_crrn_quantum
1316 <para>For example, to read the maximum allowed batch size of a CRR-N
1317 policy on the ost_io service, run:</para>
1319 $ lctl get_param ost.OSS.ost_io.nrs_crrn_quantum
1320 ost.OSS.ost_io.nrs_crrn_quantum=reg_quantum:16
1324 <para>You can see that there is a separate maximum allowed batch size
1326 <literal>reg_quantum</literal>) and high-priority (
1327 <literal>hp_quantum</literal>) RPCs (if the PTLRPC service supports
1328 high-priority RPCs).</para>
1329 <para>To set the maximum allowed batch size of a CRR-N policy on a
1330 given service, run:</para>
1332 lctl set_param {service}.nrs_crrn_quantum=
1333 <replaceable>1-65535</replaceable>
1335 <para>This will set the maximum allowed batch size on a given
1336 service, for both regular and high-priority RPCs (if the PTLRPC
1337 service supports high-priority RPCs), to the indicated value.</para>
1338 <para>For example, to set the maximum allowed batch size on the
1339 ldlm_canceld service to 16 RPCs, run:</para>
1341 $ lctl set_param ldlm.services.ldlm_canceld.nrs_crrn_quantum=16
1342 ldlm.services.ldlm_canceld.nrs_crrn_quantum=16
1345 <para>For PTLRPC services that support high-priority RPCs, you can
1346 also specify a different maximum allowed batch size for regular and
1347 high-priority RPCs, by running:</para>
1349 $ lctl set_param {service}.nrs_crrn_quantum="
1350 <replaceable>reg_quantum|hp_quantum</replaceable>:
1351 <replaceable>1-65535</replaceable>"
1353 <para>For example, to set the maximum allowed batch size on the
1354 ldlm_canceld service, for high-priority RPCs to 32, run:</para>
1356 $ lctl set_param ldlm.services.ldlm_canceld.nrs_crrn_quantum="hp_quantum:32"
1357 ldlm.services.ldlm_canceld.nrs_crrn_quantum=hp_quantum:32
1360 <para>By using the last method, you can also set the maximum regular
1361 and high-priority RPC batch sizes to different values, in a single
1362 command invocation.</para>
1369 <primary>tuning</primary>
1370 <secondary>Network Request Scheduler (NRS) Tuning</secondary>
1371 <tertiary>object-based round-robin (ORR) policy</tertiary>
1372 </indexterm>Object-based Round-Robin (ORR) policy</title>
1373 <para>The object-based round-robin (ORR) policy performs batched
1374 round-robin scheduling of bulk read write (brw) RPCs, with each batch
1375 consisting of RPCs that pertain to the same backend-file system object,
1376 as identified by its OST FID.</para>
1377 <para>The ORR policy is only available for use on the ost_io service. The
1378 RPC batches it forms can potentially consist of mixed bulk read and bulk
1379 write RPCs. The RPCs in each batch are ordered in an ascending manner,
1380 based on either the file offsets, or the physical disk offsets of each
1381 RPC (only applicable to bulk read RPCs).</para>
1382 <para>The aim of the ORR policy is to provide for increased bulk read
1383 throughput in some cases, by ordering bulk read RPCs (and potentially
1384 bulk write RPCs), and thus minimizing costly disk seek operations.
1385 Performance may also benefit from any resulting improvement in resource
1386 utilization, or by taking advantage of better locality of reference
1387 between RPCs.</para>
1388 <para>The ORR policy has the following tunables that can be used to
1389 adjust its behaviour:</para>
1393 <literal>ost.OSS.ost_io.nrs_orr_quantum</literal>
1396 <literal>ost.OSS.ost_io.nrs_orr_quantum</literal> tunable determines
1397 the maximum allowed size of each batch of RPCs; the unit of measure
1398 is in number of RPCs. To read the maximum allowed batch size of the
1399 ORR policy, run:</para>
1401 $ lctl get_param ost.OSS.ost_io.nrs_orr_quantum
1402 ost.OSS.ost_io.nrs_orr_quantum=reg_quantum:256
1406 <para>You can see that there is a separate maximum allowed batch size
1408 <literal>reg_quantum</literal>) and high-priority (
1409 <literal>hp_quantum</literal>) RPCs (if the PTLRPC service supports
1410 high-priority RPCs).</para>
1411 <para>To set the maximum allowed batch size for the ORR policy,
1414 $ lctl set_param ost.OSS.ost_io.nrs_orr_quantum=
1415 <replaceable>1-65535</replaceable>
1417 <para>This will set the maximum allowed batch size for both regular
1418 and high-priority RPCs, to the indicated value.</para>
1419 <para>You can also specify a different maximum allowed batch size for
1420 regular and high-priority RPCs, by running:</para>
1422 $ lctl set_param ost.OSS.ost_io.nrs_orr_quantum=
1423 <replaceable>reg_quantum|hp_quantum</replaceable>:
1424 <replaceable>1-65535</replaceable>
1426 <para>For example, to set the maximum allowed batch size for regular
1427 RPCs to 128, run:</para>
1429 $ lctl set_param ost.OSS.ost_io.nrs_orr_quantum=reg_quantum:128
1430 ost.OSS.ost_io.nrs_orr_quantum=reg_quantum:128
1433 <para>By using the last method, you can also set the maximum regular
1434 and high-priority RPC batch sizes to different values, in a single
1435 command invocation.</para>
1439 <literal>ost.OSS.ost_io.nrs_orr_offset_type</literal>
1442 <literal>ost.OSS.ost_io.nrs_orr_offset_type</literal> tunable
1443 determines whether the ORR policy orders RPCs within each batch based
1444 on logical file offsets or physical disk offsets. To read the offset
1445 type value for the ORR policy, run:</para>
1447 $ lctl get_param ost.OSS.ost_io.nrs_orr_offset_type
1448 ost.OSS.ost_io.nrs_orr_offset_type=reg_offset_type:physical
1449 hp_offset_type:logical
1452 <para>You can see that there is a separate offset type value for
1454 <literal>reg_offset_type</literal>) and high-priority (
1455 <literal>hp_offset_type</literal>) RPCs.</para>
1456 <para>To set the ordering type for the ORR policy, run:</para>
1458 $ lctl set_param ost.OSS.ost_io.nrs_orr_offset_type=
1459 <replaceable>physical|logical</replaceable>
1461 <para>This will set the offset type for both regular and
1462 high-priority RPCs, to the indicated value.</para>
1463 <para>You can also specify a different offset type for regular and
1464 high-priority RPCs, by running:</para>
1466 $ lctl set_param ost.OSS.ost_io.nrs_orr_offset_type=
1467 <replaceable>reg_offset_type|hp_offset_type</replaceable>:
1468 <replaceable>physical|logical</replaceable>
1470 <para>For example, to set the offset type for high-priority RPCs to
1471 physical disk offsets, run:</para>
1473 $ lctl set_param ost.OSS.ost_io.nrs_orr_offset_type=hp_offset_type:physical
1474 ost.OSS.ost_io.nrs_orr_offset_type=hp_offset_type:physical
1476 <para>By using the last method, you can also set offset type for
1477 regular and high-priority RPCs to different values, in a single
1478 command invocation.</para>
1480 <para>Irrespective of the value of this tunable, only logical
1481 offsets can be, and are, used for ordering bulk write RPCs.</para>
1486 <literal>ost.OSS.ost_io.nrs_orr_supported</literal>
1489 <literal>ost.OSS.ost_io.nrs_orr_supported</literal> tunable determines
1490 the type of RPCs that the ORR policy will handle. To read the types
1491 of supported RPCs by the ORR policy, run:</para>
1493 $ lctl get_param ost.OSS.ost_io.nrs_orr_supported
1494 ost.OSS.ost_io.nrs_orr_supported=reg_supported:reads
1495 hp_supported:reads_and_writes
1498 <para>You can see that there is a separate supported 'RPC types'
1500 <literal>reg_supported</literal>) and high-priority (
1501 <literal>hp_supported</literal>) RPCs.</para>
1502 <para>To set the supported RPC types for the ORR policy, run:</para>
1504 $ lctl set_param ost.OSS.ost_io.nrs_orr_supported=
1505 <replaceable>reads|writes|reads_and_writes</replaceable>
1507 <para>This will set the supported RPC types for both regular and
1508 high-priority RPCs, to the indicated value.</para>
1509 <para>You can also specify a different supported 'RPC types' value
1510 for regular and high-priority RPCs, by running:</para>
1512 $ lctl set_param ost.OSS.ost_io.nrs_orr_supported=
1513 <replaceable>reg_supported|hp_supported</replaceable>:
1514 <replaceable>reads|writes|reads_and_writes</replaceable>
1516 <para>For example, to set the supported RPC types to bulk read and
1517 bulk write RPCs for regular requests, run:</para>
1520 ost.OSS.ost_io.nrs_orr_supported=reg_supported:reads_and_writes
1521 ost.OSS.ost_io.nrs_orr_supported=reg_supported:reads_and_writes
1524 <para>By using the last method, you can also set the supported RPC
1525 types for regular and high-priority RPC to different values, in a
1526 single command invocation.</para>
1533 <primary>tuning</primary>
1534 <secondary>Network Request Scheduler (NRS) Tuning</secondary>
1535 <tertiary>Target-based round-robin (TRR) policy</tertiary>
1536 </indexterm>Target-based Round-Robin (TRR) policy</title>
1537 <para>The target-based round-robin (TRR) policy performs batched
1538 round-robin scheduling of brw RPCs, with each batch consisting of RPCs
1539 that pertain to the same OST, as identified by its OST index.</para>
1540 <para>The TRR policy is identical to the object-based round-robin (ORR)
1541 policy, apart from using the brw RPC's target OST index instead of the
1542 backend-fs object's OST FID, for determining the RPC scheduling order.
1543 The goals of TRR are effectively the same as for ORR, and it uses the
1544 following tunables to adjust its behaviour:</para>
1548 <literal>ost.OSS.ost_io.nrs_trr_quantum</literal>
1550 <para>The purpose of this tunable is exactly the same as for the
1551 <literal>ost.OSS.ost_io.nrs_orr_quantum</literal> tunable for the ORR
1552 policy, and you can use it in exactly the same way.</para>
1556 <literal>ost.OSS.ost_io.nrs_trr_offset_type</literal>
1558 <para>The purpose of this tunable is exactly the same as for the
1559 <literal>ost.OSS.ost_io.nrs_orr_offset_type</literal> tunable for the
1560 ORR policy, and you can use it in exactly the same way.</para>
1564 <literal>ost.OSS.ost_io.nrs_trr_supported</literal>
1566 <para>The purpose of this tunable is exactly the same as for the
1567 <literal>ost.OSS.ost_io.nrs_orr_supported</literal> tunable for the
1568 ORR policy, and you can use it in exactly the same way.</para>
1572 <section xml:id="dbdoclet.tbftuning" condition='l26'>
1575 <primary>tuning</primary>
1576 <secondary>Network Request Scheduler (NRS) Tuning</secondary>
1577 <tertiary>Token Bucket Filter (TBF) policy</tertiary>
1578 </indexterm>Token Bucket Filter (TBF) policy</title>
1579 <para>The TBF (Token Bucket Filter) is a Lustre NRS policy which enables
1580 Lustre services to enforce RPC rate limits on clients/jobs for QoS
1581 (Quality of Service) purposes.</para>
1583 <title>The internal structure of TBF policy</title>
1586 <imagedata scalefit="1" width="50%"
1587 fileref="figures/TBF_policy.png" />
1590 <phrase>The internal structure of TBF policy</phrase>
1594 <para>When an RPC request arrives, the TBF policy puts it into a waiting
1595 queue according to its classification. The classification of RPC requests is
1596 based on either the NID or the JobID of the RPC, depending on how TBF is
1597 configured. The TBF policy maintains multiple queues in the system, one queue
1598 for each category in the classification of RPC requests. Requests wait
1599 for tokens in the FIFO queue before they are handled, so as to keep
1600 the RPC rates under the configured limits.</para>
1601 <para>When Lustre services are too busy to handle all of the requests in
1602 time, not all of the specified rates of the queues can be satisfied.
1603 Nothing bad will happen except that some of the RPC rates are slower than
1604 configured. In this case, the queues with higher rates will have an
1605 advantage over the queues with lower rates, but none of them will be starved.</para>
1607 <para>To manage the RPC rate of queues, we don't need to set the rate of
1608 each queue manually. Instead, we define rules which TBF policy matches to
1609 determine RPC rate limits. All of the defined rules are organized as an
1610 ordered list. Whenever a queue is newly created, it goes through the rule
1611 list and takes the first matched rule as its rule, so that the queue
1612 knows its RPC token rate. A rule can be added to or removed from the list
1613 at run time. Whenever the list of rules is changed, the queues will
1614 update their matched rules.</para>
1615 <section remap="h4">
1616 <title>Enable TBF policy</title>
1617 <para>Command:</para>
1618 <screen>lctl set_param ost.OSS.ost_io.nrs_policies="tbf <<replaceable>policy</replaceable>>"
1620 <para>Currently, RPCs can be classified into different types
1621 according to their NID, JobID, opcode, or UID/GID. When enabling the TBF
1622 policy, you can specify one of these types, or just use "tbf" to enable
1623 all of them for fine-grained RPC request classification.</para>
1624 <para>Example:</para>
1625 <screen>$ lctl set_param ost.OSS.ost_io.nrs_policies="tbf"
1626 $ lctl set_param ost.OSS.ost_io.nrs_policies="tbf nid"
1627 $ lctl set_param ost.OSS.ost_io.nrs_policies="tbf jobid"
1628 $ lctl set_param ost.OSS.ost_io.nrs_policies="tbf opcode"
1629 $ lctl set_param ost.OSS.ost_io.nrs_policies="tbf uid"
1630 $ lctl set_param ost.OSS.ost_io.nrs_policies="tbf gid"</screen>
1632 <section remap="h4">
1633 <title>Start a TBF rule</title>
1634 <para>The TBF rule is defined in the parameter
1635 <literal>ost.OSS.ost_io.nrs_tbf_rule</literal>.</para>
1636 <para>Command:</para>
1637 <screen>lctl set_param x.x.x.nrs_tbf_rule=
1638 "[reg|hp] start <replaceable>rule_name</replaceable> <replaceable>arguments</replaceable>..."
1640 <para>'<replaceable>rule_name</replaceable>' is the name of the TBF
1641 policy rule and '<replaceable>arguments</replaceable>' is a
1642 string that specifies the details of the rule, which differ by rule type.
1645 <para>Next, the different types of TBF policies will be described.</para>
1647 <para><emphasis role="bold">NID based TBF policy</emphasis></para>
1648 <para>Command:</para>
1649 <screen>lctl set_param x.x.x.nrs_tbf_rule=
1650 "[reg|hp] start <replaceable>rule_name</replaceable> nid={<replaceable>nidlist</replaceable>} rate=<replaceable>rate</replaceable>"
1652 <para>'<replaceable>nidlist</replaceable>' uses the same format
1653 as when configuring an LNet route. '<replaceable>rate</replaceable>' is
1654 the (upper limit) RPC rate of the rule.</para>
1655 <para>Example:</para>
1656 <screen>$ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
1657 "start other_clients nid={192.168.*.*@tcp} rate=50"
1658 $ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
1659 "start computes nid={192.168.1.[2-128]@tcp} rate=500"
1660 $ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
1661 "start loginnode nid={192.168.1.1@tcp} rate=100"</screen>
1662 <para>In this example, the rate of processing RPC requests from
1663 compute nodes is at most 5x as fast as those from login nodes.
1664 The output of <literal>ost.OSS.ost_io.nrs_tbf_rule</literal> is
1666 <screen>lctl get_param ost.OSS.ost_io.nrs_tbf_rule
1667 ost.OSS.ost_io.nrs_tbf_rule=
1670 loginnode {192.168.1.1@tcp} 100, ref 0
1671 computes {192.168.1.[2-128]@tcp} 500, ref 0
1672 other_clients {192.168.*.*@tcp} 50, ref 0
1673 default {*} 10000, ref 0
1674 high_priority_requests:
1676 loginnode {192.168.1.1@tcp} 100, ref 0
1677 computes {192.168.1.[2-128]@tcp} 500, ref 0
1678 other_clients {192.168.*.*@tcp} 50, ref 0
1679 default {*} 10000, ref 0</screen>
1680 <para>Also, the rule can be written in <literal>reg</literal> and
1681 <literal>hp</literal> formats:</para>
1682 <screen>$ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
1683 "reg start loginnode nid={192.168.1.1@tcp} rate=100"
1684 $ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
1685 "hp start loginnode nid={192.168.1.1@tcp} rate=100"</screen>
1688 <para><emphasis role="bold">JobID based TBF policy</emphasis></para>
1689 <para>For the JobID, please see
1690 <xref xmlns:xlink="http://www.w3.org/1999/xlink"
1691 linkend="dbdoclet.jobstats" /> for more details.</para>
1692 <para>Command:</para>
1693 <screen>lctl set_param x.x.x.nrs_tbf_rule=
1694 "[reg|hp] start <replaceable>rule_name</replaceable> jobid={<replaceable>jobid_list</replaceable>} rate=<replaceable>rate</replaceable>"
1696 <para>Wildcards are supported in
1697 {<replaceable>jobid_list</replaceable>}.</para>
1698 <para>Example:</para>
1699 <screen>$ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
1700 "start iozone_user jobid={iozone.500} rate=100"
1701 $ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
1702 "start dd_user jobid={dd.*} rate=50"
1703 $ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
1704 "start user1 jobid={*.600} rate=10"
1705 $ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
1706 "start user2 jobid={io*.10* *.500} rate=200"</screen>
1707 <para>Also, the rule can be written in <literal>reg</literal> and
1708 <literal>hp</literal> formats:</para>
1709 <screen>$ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
1710 "hp start iozone_user1 jobid={iozone.500} rate=100"
1711 $ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
1712 "reg start iozone_user1 jobid={iozone.500} rate=100"</screen>
1715 <para><emphasis role="bold">Opcode based TBF policy</emphasis></para>
1716 <para>Command:</para>
1717 <screen>$ lctl set_param x.x.x.nrs_tbf_rule=
1718 "[reg|hp] start <replaceable>rule_name</replaceable> opcode={<replaceable>opcode_list</replaceable>} rate=<replaceable>rate</replaceable>"
1720 <para>Example:</para>
1721 <screen>$ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
1722 "start user1 opcode={ost_read} rate=100"
1723 $ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
1724 "start iozone_user1 opcode={ost_read ost_write} rate=200"</screen>
1725 <para>Also, the rule can be written in <literal>reg</literal> and
1726 <literal>hp</literal> formats:</para>
1727 <screen>$ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
1728 "hp start iozone_user1 opcode={ost_read} rate=100"
1729 $ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
1730 "reg start iozone_user1 opcode={ost_read} rate=100"</screen>
1733 <para><emphasis role="bold">UID/GID based TBF policy</emphasis></para>
1734 <para>Command:</para>
1735 <screen>$ lctl set_param ost.OSS.*.nrs_tbf_rule=\
1736 "[reg][hp] start <replaceable>rule_name</replaceable> uid={<replaceable>uid</replaceable>} rate=<replaceable>rate</replaceable>"
1737 $ lctl set_param ost.OSS.*.nrs_tbf_rule=\
1738 "[reg][hp] start <replaceable>rule_name</replaceable> gid={<replaceable>gid</replaceable>} rate=<replaceable>rate</replaceable>"</screen>
1739 <para>Example:</para>
1740 <para>Limit the rate of RPC requests of the uid 500</para>
1741 <screen>$ lctl set_param ost.OSS.*.nrs_tbf_rule=\
1742 "start tbf_name uid={500} rate=100"</screen>
1743 <para>Limit the rate of RPC requests of the gid 500</para>
1744 <screen>$ lctl set_param ost.OSS.*.nrs_tbf_rule=\
1745 "start tbf_name gid={500} rate=100"</screen>
1746 <para>Also, you can use the following rule to control all requests
1748 <para>Start the tbf uid QoS on MDS:</para>
1749 <screen>$ lctl set_param mds.MDS.*.nrs_policies="tbf uid"</screen>
1750 <para>Limit the rate of RPC requests of the uid 500</para>
1751 <screen>$ lctl set_param mds.MDS.*.nrs_tbf_rule=\
1752 "start tbf_name uid={500} rate=100"</screen>
1755 <para><emphasis role="bold">Policy combination</emphasis></para>
1756 <para>To support TBF rules with complex expressions of conditions,
1757 the TBF classifier is extended to classify RPCs in a more fine-grained
1758 way. This feature supports logical conditional conjunction and
1759 disjunction operations among different types.
1761 "&" represents the conditional conjunction and
1762 "," represents the conditional disjunction.</para>
1763 <para>Example:</para>
1764 <screen>$ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
1765 "start comp_rule opcode={ost_write}&jobid={dd.0},\
1766 nid={192.168.1.[1-128]@tcp 0@lo} rate=100"</screen>
1767 <para>In this example, those RPCs whose <literal>opcode</literal> is
1768 ost_write and <literal>jobid</literal> is dd.0, or
1769 <literal>nid</literal> satisfies the condition of
1770 {192.168.1.[1-128]@tcp 0@lo} will be processed at the rate of 100
1772 The output of <literal>ost.OSS.ost_io.nrs_tbf_rule</literal> is like:
1774 <screen>$ lctl get_param ost.OSS.ost_io.nrs_tbf_rule
1775 ost.OSS.ost_io.nrs_tbf_rule=
1778 comp_rule opcode={ost_write}&jobid={dd.0},nid={192.168.1.[1-128]@tcp 0@lo} 100, ref 0
1779 default * 10000, ref 0
1781 comp_rule opcode={ost_write}&jobid={dd.0},nid={192.168.1.[1-128]@tcp 0@lo} 100, ref 0
1782 default * 10000, ref 0
1783 high_priority_requests:
1785 comp_rule opcode={ost_write}&jobid={dd.0},nid={192.168.1.[1-128]@tcp 0@lo} 100, ref 0
1786 default * 10000, ref 0
1788 comp_rule opcode={ost_write}&jobid={dd.0},nid={192.168.1.[1-128]@tcp 0@lo} 100, ref 0
1789 default * 10000, ref 0</screen>
1790 <para>Example:</para>
1791 <screen>$ lctl set_param ost.OSS.*.nrs_tbf_rule=\
1792 "start tbf_name uid={500}&gid={500} rate=100"</screen>
1793 <para>In this example, those RPC requests whose uid is 500 and
1794 gid is 500 will be processed at the rate of 100 req/sec.</para>
1798 <section remap="h4">
1799 <title>Change a TBF rule</title>
1800 <para>Command:</para>
1801 <screen>lctl set_param x.x.x.nrs_tbf_rule=
1802 "[reg|hp] change <replaceable>rule_name</replaceable> rate=<replaceable>rate</replaceable>"
1804 <para>Example:</para>
1805 <screen>$ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
1806 "change loginnode rate=200"
1807 $ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
1808 "reg change loginnode rate=200"
1809 $ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
1810 "hp change loginnode rate=200"
1813 <section remap="h4">
1814 <title>Stop a TBF rule</title>
1815 <para>Command:</para>
1816 <screen>lctl set_param x.x.x.nrs_tbf_rule="[reg|hp] stop
1817 <replaceable>rule_name</replaceable>"</screen>
1818 <para>Example:</para>
1819 <screen>$ lctl set_param ost.OSS.ost_io.nrs_tbf_rule="stop loginnode"
1820 $ lctl set_param ost.OSS.ost_io.nrs_tbf_rule="reg stop loginnode"
1821 $ lctl set_param ost.OSS.ost_io.nrs_tbf_rule="hp stop loginnode"</screen>
1823 <section remap="h4">
1824 <title>Rule options</title>
1825 <para>To support more flexible rule conditions, the following options
1829 <para><emphasis role="bold">Reordering of TBF rules</emphasis></para>
1830 <para>By default, a newly started rule takes priority over the older ones,
1831 but the rank of a rule can be changed by specifying the
1832 '<literal>rank=</literal>' argument when inserting a new rule with the
1833 "<literal>start</literal>" command. The rank can also be changed with the
1834 "<literal>change</literal>" command.
1836 <para>Command:</para>
1837 <screen>lctl set_param ost.OSS.ost_io.nrs_tbf_rule=
1838 "start <replaceable>rule_name</replaceable> <replaceable>arguments</replaceable>... rank=<replaceable>obj_rule_name</replaceable>"
1839 lctl set_param ost.OSS.ost_io.nrs_tbf_rule=
1840 "change <replaceable>rule_name</replaceable> rate=<replaceable>rate</replaceable> rank=<replaceable>obj_rule_name</replaceable>"
1842 <para>By specifying the existing rule
1843 '<replaceable>obj_rule_name</replaceable>', the new rule
1844 '<replaceable>rule_name</replaceable>' will be moved to the front of
1845 '<replaceable>obj_rule_name</replaceable>'.</para>
1846 <para>Example:</para>
1847 <screen>$ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
1848 "start computes nid={192.168.1.[2-128]@tcp} rate=500"
1849 $ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
1850 "start user1 jobid={iozone.500 dd.500} rate=100"
1851 $ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
1852 "start iozone_user1 opcode={ost_read ost_write} rate=200 rank=computes"</screen>
1853 <para>In this example, rule "iozone_user1" is inserted in front of
1854 rule "computes". The resulting rule order can be displayed with the following command:
1856 <screen>$ lctl get_param ost.OSS.ost_io.nrs_tbf_rule
1857 ost.OSS.ost_io.nrs_tbf_rule=
1860 user1 jobid={iozone.500 dd.500} 100, ref 0
1861 iozone_user1 opcode={ost_read ost_write} 200, ref 0
1862 computes nid={192.168.1.[2-128]@tcp} 500, ref 0
1863 default * 10000, ref 0
1865 user1 jobid={iozone.500 dd.500} 100, ref 0
1866 iozone_user1 opcode={ost_read ost_write} 200, ref 0
1867 computes nid={192.168.1.[2-128]@tcp} 500, ref 0
1868 default * 10000, ref 0
1869 high_priority_requests:
1871 user1 jobid={iozone.500 dd.500} 100, ref 0
1872 iozone_user1 opcode={ost_read ost_write} 200, ref 0
1873 computes nid={192.168.1.[2-128]@tcp} 500, ref 0
1874 default * 10000, ref 0
1876 user1 jobid={iozone.500 dd.500} 100, ref 0
1877 iozone_user1 opcode={ost_read ost_write} 200, ref 0
1878 computes nid={192.168.1.[2-128]@tcp} 500, ref 0
1879 default * 10000, ref 0</screen>
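<para>An existing rule can be repositioned in the same way with the
"<literal>change</literal>" command. A minimal sketch, reusing the rule names
from the example above, that moves rule "computes" in front of rule
"user1":</para>
<screen>$ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
"change computes rate=500 rank=user1"</screen>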
1882 <para><emphasis role="bold">TBF realtime policies under congestion
1884 <para>During TBF evaluation, it was found that when the sum of the I/O
1885 bandwidth requirements of all classes exceeds the system capacity,
1886 classes configured with the same rate limit receive less bandwidth than
1887 an even distribution would provide. The reason is that the heavy load on a
1888 congested server causes some classes to miss their deadlines, so
1889 the number of calculated tokens may be larger than 1
1890 at dequeue time. In the original implementation, all classes are
1891 handled equally and any excess tokens are simply discarded.</para>
1892 <para>Thus, a Hard Token Compensation (HTC) strategy has been
1893 implemented. A class can be configured with the HTC feature through the
1894 rule it matches. This feature indicates that requests in this kind of
1895 class queue have strict real-time requirements and that the bandwidth
1896 assignment must be satisfied as closely as possible. When a deadline
1897 is missed, the class keeps the deadline unchanged and the time
1898 residue (the remainder of the elapsed time divided by 1/r) is carried
1899 over to the next round. This ensures that the next idle I/O thread will
1900 always select this class to serve until all accumulated excess
1901 tokens are handled or there are no pending requests in the class
1903 <para>Command:</para>
1904 <para>A new command format is added to enable the realtime feature
1906 <screen>lctl set_param x.x.x.nrs_tbf_rule=\
1907 "start <replaceable>rule_name</replaceable> <replaceable>arguments</replaceable>... realtime=1</screen>
1908 <para>Example:</para>
1909 <screen>$ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
1910 "start realjob jobid={dd.0} rate=100 realtime=1"</screen>
1911 <para>In this example, RPC requests whose JobID is dd.0
1912 are processed at a rate of 100 req/sec with the realtime feature enabled.</para>
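<para>The rate of such a rule can later be adjusted with the
"<literal>change</literal>" command described above. A minimal sketch,
reusing the hypothetical rule name <literal>realjob</literal>:</para>
<screen>$ lctl set_param ost.OSS.ost_io.nrs_tbf_rule=\
"change realjob rate=200"</screen>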
1917 <section xml:id="dbdoclet.delaytuning" condition='l2A'>
1920 <primary>tuning</primary>
1921 <secondary>Network Request Scheduler (NRS) Tuning</secondary>
1922 <tertiary>Delay policy</tertiary>
1923 </indexterm>Delay policy</title>
1924 <para>The NRS Delay policy seeks to perturb the timing of request
1925 processing at the PtlRPC layer, with the goal of simulating high server
1926 load, and finding and exposing timing related problems. When this policy
1927 is active, upon arrival of a request the policy will calculate an offset,
1928 within a defined, user-configurable range, from the request arrival
1929 time, to determine a time after which the request should be handled.
1930 The request is then stored using the cfs_binheap implementation,
1931 which sorts the request according to the assigned start time.
1932 Requests are removed from the binheap for handling once their start
1933 time has been passed.</para>
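<para>Like the other NRS policies, the Delay policy is enabled by setting the
<literal>nrs_policies</literal> parameter of the service in question. A
minimal sketch for the ost_io service, assuming the policy name
"delay":</para>
<screen>$ lctl set_param ost.OSS.ost_io.nrs_policies="delay"</screen>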
1934 <para>The Delay policy can be enabled on all types of PtlRPC services,
1935 and has the following tunables that can be used to adjust its behavior:
1940 <literal>{service}.nrs_delay_min</literal>
1943 <literal>{service}.nrs_delay_min</literal> tunable controls the
1944 minimum amount of time, in seconds, that a request will be delayed by
1945 this policy. The default is 5 seconds. To read this value run:</para>
1947 lctl get_param {service}.nrs_delay_min</screen>
1948 <para>For example, to read the minimum delay set on the ost_io
1949 service, run:</para>
1951 $ lctl get_param ost.OSS.ost_io.nrs_delay_min
1952 ost.OSS.ost_io.nrs_delay_min=reg_delay_min:5
1953 hp_delay_min:5</screen>
1954 <para>To set the minimum delay in RPC processing, run:</para>
1956 lctl set_param {service}.nrs_delay_min=<replaceable>0-65535</replaceable></screen>
1957 <para>This will set the minimum delay time on a given service, for both
1958 regular and high-priority RPCs (if the PtlRPC service supports
1959 high-priority RPCs), to the indicated value.</para>
1960 <para>For example, to set the minimum delay time on the ost_io service
1963 $ lctl set_param ost.OSS.ost_io.nrs_delay_min=10
1964 ost.OSS.ost_io.nrs_delay_min=10</screen>
1965 <para>For PtlRPC services that support high-priority RPCs, to set a
1966 different minimum delay time for regular and high-priority RPCs, run:
1969 lctl set_param {service}.nrs_delay_min=<replaceable>reg_delay_min|hp_delay_min</replaceable>:<replaceable>0-65535</replaceable>
1971 <para>For example, to set the minimum delay time on the ost_io service
1972 for high-priority RPCs to 3, run:</para>
1974 $ lctl set_param ost.OSS.ost_io.nrs_delay_min=hp_delay_min:3
1975 ost.OSS.ost_io.nrs_delay_min=hp_delay_min:3</screen>
1976 <para>Note, in all cases the minimum delay time cannot exceed the
1977 maximum delay time.</para>
1981 <literal>{service}.nrs_delay_max</literal>
1984 <literal>{service}.nrs_delay_max</literal> tunable controls the
1985 maximum amount of time, in seconds, that a request will be delayed by
1986 this policy. The default is 300 seconds. To read this value run:
1988 <screen>lctl get_param {service}.nrs_delay_max</screen>
1989 <para>For example, to read the maximum delay set on the ost_io
1990 service, run:</para>
1992 $ lctl get_param ost.OSS.ost_io.nrs_delay_max
1993 ost.OSS.ost_io.nrs_delay_max=reg_delay_max:300
1994 hp_delay_max:300</screen>
1995 <para>To set the maximum delay in RPC processing, run:</para>
1996 <screen>lctl set_param {service}.nrs_delay_max=<replaceable>0-65535</replaceable>
1998 <para>This will set the maximum delay time on a given service, for both
1999 regular and high-priority RPCs (if the PtlRPC service supports
2000 high-priority RPCs), to the indicated value.</para>
2001 <para>For example, to set the maximum delay time on the ost_io service
2004 $ lctl set_param ost.OSS.ost_io.nrs_delay_max=60
2005 ost.OSS.ost_io.nrs_delay_max=60</screen>
2006 <para>For PtlRPC services that support high-priority RPCs, to set a
2007 different maximum delay time for regular and high-priority RPCs, run:
2009 <screen>lctl set_param {service}.nrs_delay_max=<replaceable>reg_delay_max|hp_delay_max</replaceable>:<replaceable>0-65535</replaceable></screen>
2010 <para>For example, to set the maximum delay time on the ost_io service
2011 for high-priority RPCs to 30, run:</para>
2013 $ lctl set_param ost.OSS.ost_io.nrs_delay_max=hp_delay_max:30
2014 ost.OSS.ost_io.nrs_delay_max=hp_delay_max:30</screen>
2015 <para>Note, in all cases the maximum delay time cannot be less than the
2016 minimum delay time.</para>
2020 <literal>{service}.nrs_delay_pct</literal>
2023 <literal>{service}.nrs_delay_pct</literal> tunable controls the
2024 percentage of requests that will be delayed by this policy. The
2025 default is 100. Note that when a request is not selected for handling by
2026 the delay policy because of this variable, the request will be handled
2027 by whatever fallback policy is defined for that service. If no other
2028 fallback policy is defined, the request will be handled by the
2029 FIFO policy. To read this value run:</para>
2030 <screen>lctl get_param {service}.nrs_delay_pct</screen>
2031 <para>For example, to read the percentage of requests being delayed on
2032 the ost_io service, run:</para>
2034 $ lctl get_param ost.OSS.ost_io.nrs_delay_pct
2035 ost.OSS.ost_io.nrs_delay_pct=reg_delay_pct:100
2036 hp_delay_pct:100</screen>
2037 <para>To set the percentage of delayed requests, run:</para>
2039 lctl set_param {service}.nrs_delay_pct=<replaceable>0-100</replaceable></screen>
2040 <para>This will set the percentage of requests delayed on a given
2041 service, for both regular and high-priority RPCs (if the PtlRPC service
2042 supports high-priority RPCs), to the indicated value.</para>
2043 <para>For example, to set the percentage of delayed requests on the
2044 ost_io service to 50, run:</para>
2046 $ lctl set_param ost.OSS.ost_io.nrs_delay_pct=50
2047 ost.OSS.ost_io.nrs_delay_pct=50
2049 <para>For PtlRPC services that support high-priority RPCs, to set a
2050 different delay percentage for regular and high-priority RPCs, run:
2052 <screen>lctl set_param {service}.nrs_delay_pct=<replaceable>reg_delay_pct|hp_delay_pct</replaceable>:<replaceable>0-100</replaceable>
2054 <para>For example, to set the percentage of delayed requests on the
2055 ost_io service for high-priority RPCs to 5, run:</para>
2056 <screen>$ lctl set_param ost.OSS.ost_io.nrs_delay_pct=hp_delay_pct:5
2057 ost.OSS.ost_io.nrs_delay_pct=hp_delay_pct:5
2063 <section xml:id="dbdoclet.50438272_25884">
2066 <primary>tuning</primary>
2067 <secondary>lockless I/O</secondary>
2068 </indexterm>Lockless I/O Tunables</title>
2069 <para>The lockless I/O tunable feature allows servers to ask clients to do
2070 lockless I/O (the server does the locking on behalf of clients) for
2071 contended files to avoid lock ping-pong.</para>
2072 <para>The lockless I/O patch introduces these tunables:</para>
2076 <emphasis role="bold">OST-side:</emphasis>
2079 ldlm.namespaces.filter-<replaceable>fsname</replaceable>-*.
2082 <literal>contended_locks</literal> - If the number of lock conflicts found
2083 while scanning the granted and waiting queues exceeds
2084 <literal>contended_locks</literal>, the resource is considered to be contended.</para>
2086 <literal>contention_seconds</literal> - The time, in seconds, that the
2087 resource remains in the contended state.</para>
2089 <literal>max_nolock_bytes</literal> - Server-side locking is performed only for
2090 requests smaller than the size set in the
2091 <literal>max_nolock_bytes</literal> parameter. If this tunable is
2092 set to zero (0), it disables server-side locking for read/write
2097 <emphasis role="bold">Client-side:</emphasis>
2100 /proc/fs/lustre/llite/lustre-*
2103 <literal>contention_seconds</literal> - The
2104 <literal>llite</literal> inode remembers its contended state for the
2105 time specified in this parameter.</para>
2109 <emphasis role="bold">Client-side statistics:</emphasis>
2112 <literal>/proc/fs/lustre/llite/lustre-*/stats</literal> file has new
2113 rows for lockless I/O statistics.</para>
2115 <literal>lockless_read_bytes</literal> and
2116 <literal>lockless_write_bytes</literal> - To count the total bytes read
2117 or written, the client makes its own decisions based on the request
2118 size. The client does not communicate with the server if the request
2119 size is smaller than the
2120 <literal>min_nolock_size</literal>, without acquiring locks by the
2125 <section condition="l29">
2128 <primary>tuning</primary>
2129 <secondary>with lfs ladvise</secondary>
2131 Server-Side Advice and Hinting
2133 <section><title>Overview</title>
2134 <para>Use the <literal>lfs ladvise</literal> command to give file access
2135 advice or hints to servers.</para>
2136 <screen>lfs ladvise [--advice|-a ADVICE ] [--background|-b]
2137 [--start|-s START[kMGT]]
2138 {[--end|-e END[kMGT]] | [--length|-l LENGTH[kMGT]]}
2139 <emphasis>file</emphasis> ...
2142 <informaltable frame="all">
2144 <colspec colname="c1" colwidth="50*"/>
2145 <colspec colname="c2" colwidth="50*"/>
2149 <para><emphasis role="bold">Option</emphasis></para>
2152 <para><emphasis role="bold">Description</emphasis></para>
2159 <para><literal>-a</literal>, <literal>--advice=</literal>
2160 <literal>ADVICE</literal></para>
2163 <para>Give advice or hint of type <literal>ADVICE</literal>.
2164 Advice types are:</para>
2165 <para><literal>willread</literal> to prefetch data into server
2167 <para><literal>dontneed</literal> to cleanup data cache on
2169 <para><literal>lockahead</literal> Request an LDLM extent lock
2170 of the given mode on the given byte range </para>
2171 <para><literal>noexpand</literal> Disable extent lock expansion
2172 behavior for I/O to this file descriptor</para>
2177 <para><literal>-b</literal>, <literal>--background</literal>
2181 <para>Enable the advice to be sent and handled asynchronously.
2187 <para><literal>-s</literal>, <literal>--start=</literal>
2188 <literal>START_OFFSET</literal></para>
2191 <para>File range starts from <literal>START_OFFSET</literal>
2197 <para><literal>-e</literal>, <literal>--end=</literal>
2198 <literal>END_OFFSET</literal></para>
2201 <para>File range ends at (not including)
2202 <literal>END_OFFSET</literal>. This option may not be
2203 specified at the same time as the <literal>-l</literal>
2209 <para><literal>-l</literal>, <literal>--length=</literal>
2210 <literal>LENGTH</literal></para>
2213 <para>File range has length of <literal>LENGTH</literal>.
2214 This option may not be specified at the same time as the
2215 <literal>-e</literal> option.</para>
2220 <para><literal>-m</literal>, <literal>--mode=</literal>
2221 <literal>MODE</literal></para>
2224 <para>Lockahead request mode <literal>{READ,WRITE}</literal>.
2225 Request a lock with this mode.</para>
2232 <para>Typically, <literal>lfs ladvise</literal> forwards the advice to
2233 Lustre servers without guaranteeing when and how the servers will react to
2234 the advice. Actions may or may not be triggered when the advice is
2235 received, depending on the type of the advice, as well as the real-time
2236 decision of the affected server-side components.</para>
2237 <para>A typical usage of ladvise is to enable applications and users with
2238 external knowledge to intervene in server-side cache management. For
2239 example, if many different clients are doing small random reads of a
2240 file, prefetching pages into OSS cache with big linear reads before the
2241 random IO is a net benefit. Fetching that data into each client cache with
2242 fadvise() may not be, due to much more data being sent to the client.
2245 <literal>ladvise lockahead</literal> is different in that it attempts to
2246 control LDLM locking behavior by explicitly requesting LDLM locks in
2247 advance of use. This does not directly affect caching behavior, instead
2248 it is used in special cases to avoid pathological results (lock exchange)
2249 from the normal LDLM locking behavior.
2252 Note that the <literal>noexpand</literal> advice applies to a specific
2253 file descriptor, so giving it via the <literal>lfs</literal> utility has no
2254 effect; it must be set on the file descriptor that is actually used for I/O.
2256 <para>The main difference between the Linux <literal>fadvise()</literal>
2257 system call and <literal>lfs ladvise</literal> is that
2258 <literal>fadvise()</literal> is only a client side mechanism that does
2259 not pass the advice to the filesystem, while <literal>ladvise</literal>
2260 can send advice or hints to the Lustre server side.</para>
2262 <section><title>Examples</title>
2263 <para>The following example gives the OST(s) holding the first 1GB of
2264 <literal>/mnt/lustre/file1</literal> a hint that the first 1GB of the
2265 file will be read soon.</para>
2266 <screen>client1$ lfs ladvise -a willread -s 0 -e 1048576000 /mnt/lustre/file1
2268 <para>The following example gives the OST(s) holding the first 1GB of
2269 <literal>/mnt/lustre/file1</literal> a hint that the first 1GB of file
2270 will not be read in the near future, so the OST(s) can drop the
2271 cached pages of the file from memory.</para>
2272 <screen>client1$ lfs ladvise -a dontneed -s 0 -e 1048576000 /mnt/lustre/file1
2274 <para>The following example requests an LDLM read lock on the first
2275 1 MiB of <literal>/mnt/lustre/file1</literal>. This will attempt to
2276 request a lock from the OST holding that region of the file.</para>
2277 <screen>client1$ lfs ladvise -a lockahead -m READ -s 0 -e 1M /mnt/lustre/file1
2279 <para>The following example requests an LDLM write lock on
2280 [3 MiB, 10 MiB] of <literal>/mnt/lustre/file1</literal>. This will
2281 attempt to request a lock from the OST holding that region of the
2283 <screen>client1$ lfs ladvise -a lockahead -m WRITE -s 3M -e 10M /mnt/lustre/file1
2287 <section condition="l29">
2290 <primary>tuning</primary>
2291 <secondary>Large Bulk IO</secondary>
2293 Large Bulk IO (16MB RPC)
2295 <section><title>Overview</title>
2296 <para>Beginning with Lustre 2.9, Lustre is extended to support RPCs up
2297 to 16MB in size. By enabling a larger RPC size, fewer RPCs will be
2298 required to transfer the same amount of data between clients and
2299 servers. With a larger RPC size, the OSS can submit more data to the
2300 underlying disks at once, therefore it can produce larger disk I/Os
2301 to fully utilize the increasing bandwidth of disks.</para>
2302 <para>At client connection time, clients will negotiate with
2303 servers the maximum RPC size that can be used, but the
2304 client can always send RPCs smaller than this maximum.</para>
2305 <para>The parameter <literal>brw_size</literal> is used on the OST
2306 to tell the client the maximum (preferred) IO size. Clients that
2307 communicate with this target should never send an RPC larger than this size.
2308 Clients can individually set a smaller RPC size limit via the
2309 <literal>osc.*.max_pages_per_rpc</literal> tunable.
2312 <para>The smallest <literal>brw_size</literal> that can be set for
2313 ZFS OSTs is the <literal>recordsize</literal> of that dataset. This
2314 ensures that the client can always write a full ZFS file block if it
2315 has enough dirty data, and does not otherwise force it to do read-
2316 modify-write operations for every RPC.
2320 <section><title>Usage</title>
2321 <para>In order to enable a larger RPC size,
2322 <literal>brw_size</literal> must be changed to an IO size value up to
2323 16MB. To temporarily change <literal>brw_size</literal>, the
2324 following command should be run on the OSS:</para>
2325 <screen>oss# lctl set_param obdfilter.<replaceable>fsname</replaceable>-OST*.brw_size=16</screen>
2326 <para>To persistently change <literal>brw_size</literal>, the
2327 following command should be run:</para>
2328 <screen>oss# lctl set_param -P obdfilter.<replaceable>fsname</replaceable>-OST*.brw_size=16</screen>
2329 <para>When a client connects to an OST target, it will fetch
2330 <literal>brw_size</literal> from the target and pick the minimum value
2331 of <literal>brw_size</literal> and its local setting for
2332 <literal>max_pages_per_rpc</literal> as the actual RPC size.
2333 Therefore, the <literal>max_pages_per_rpc</literal> on the client side
2334 would have to be set to 16M, or 4096 if the PAGESIZE is 4KB, to enable
2335 a 16MB RPC. To temporarily make the change, the following command
2336 should be run on the client to set
2337 <literal>max_pages_per_rpc</literal>:</para>
2338 <screen>client$ lctl set_param osc.<replaceable>fsname</replaceable>-OST*.max_pages_per_rpc=16M</screen>
2339 <para>To persistently make this change, the following command should
2341 <screen>client$ lctl set_param -P osc.<replaceable>fsname</replaceable>-OST*.max_pages_per_rpc=16M</screen>
2342 <caution><para>The <literal>brw_size</literal> of an OST can be
2343 changed on the fly. However, clients have to be remounted to
2344 renegotiate the new maximum RPC size.</para></caution>
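<para>The RPC size limit currently in effect on a client can be checked by
reading back the <literal>max_pages_per_rpc</literal> tunable described
above. A minimal sketch:</para>
<screen>client$ lctl get_param osc.<replaceable>fsname</replaceable>-OST*.max_pages_per_rpc</screen>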
2347 <section xml:id="dbdoclet.50438272_80545">
2350 <primary>tuning</primary>
2351 <secondary>for small files</secondary>
2352 </indexterm>Improving Lustre I/O Performance for Small Files</title>
2353 <para>An environment where an application writes small file chunks from
2354 many clients to a single file can result in poor I/O performance. To
2355 improve the performance of the Lustre file system with small files:</para>
2358 <para>Have the application aggregate writes to some degree before
2359 submitting them to the Lustre file system. By default, the Lustre
2360 software enforces POSIX coherency semantics, which results in lock
2361 ping-pong between client nodes if they are all writing to the same
2362 file at the same time.</para>
2363 <para>Using MPI-IO Collective Write functionality in
2364 the Lustre ADIO driver is one way to achieve this in a
2365 straightforward manner if the application is already using MPI-IO.</para>
2368 <para>Have the application do 4kB
2369 <literal>O_DIRECT</literal> sized I/O to the file and disable locking
2370 on the output file. This avoids partial-page IO submissions and, by
2371 disabling locking, you avoid contention between clients.</para>
2374 <para>Have the application write contiguous data.</para>
2377 <para>Add more disks or use SSD disks for the OSTs. This dramatically
2378 improves the IOPS rate. Consider creating larger OSTs rather than many
2379 smaller OSTs to reduce overhead (journal, connections, etc.).</para>
2382 <para>Use RAID-1+0 OSTs instead of RAID-5/6 to avoid the RAID parity
2383 overhead of writing small chunks of data to disk.</para>
2387 <section xml:id="dbdoclet.50438272_45406">
2390 <primary>tuning</primary>
2391 <secondary>write performance</secondary>
2392 </indexterm>Understanding Why Write Performance is Better Than Read
2394 <para>Typically, the performance of write operations on a Lustre cluster is
2395 better than that of read operations. When doing writes, all clients send
2396 write RPCs asynchronously. The RPCs are allocated and written to disk in
2397 the order they arrive. In many cases, this allows the back-end storage to
2398 aggregate writes efficiently.</para>
2399 <para>In the case of read operations, the reads from clients may arrive in a
2400 different order and require a lot of seeking to be read from disk. This
2401 noticeably hampers read throughput.</para>
2402 <para>Currently, there is no readahead on the OSTs themselves, though the
2403 clients do readahead. If there are lots of clients doing reads it would not
2404 be possible to do any readahead in any case because of memory consumption
2405 (consider that even a single RPC (1 MB) readahead for 1000 clients would
2406 consume 1 GB of RAM).</para>
2407 <para>For file systems that use socklnd (TCP, Ethernet) as interconnect,
2408 there is also additional CPU overhead because the client cannot receive
2409 data without copying it from the network buffers. In the write case, the
2410 client CAN send data without the additional data copy. This means that the
2411 client is more likely to become CPU-bound during reads than writes.</para>
2415 vim:expandtab:shiftwidth=2:tabstop=8: