service immediately and disables automatic thread creation behavior.
</para>
</note>
- <para condition='l23'>Lustre software release 2.3 introduced new
- parameters to provide more control to administrators.</para>
+ <para>Parameters are available to give administrators control
+ over the number of service threads.</para>
<itemizedlist>
<listitem>
<para>
</itemizedlist>
</section>
</section>
- <section xml:id="dbdoclet.mdsbinding" condition='l23'>
+ <section xml:id="dbdoclet.mdsbinding">
<title>
<indexterm>
<primary>tuning</primary>
<secondary>MDS binding</secondary>
</indexterm>Binding MDS Service Thread to CPU Partitions</title>
- <para>With the introduction of Node Affinity (
- <xref linkend="nodeaffdef" />) in Lustre software release 2.3, MDS threads
- can be bound to particular CPU partitions (CPTs) to improve CPU cache
- usage and memory locality. Default values for CPT counts and CPU core
+ <para>With the Node Affinity (<xref linkend="nodeaffdef" />) feature,
+ MDS threads can be bound to particular CPU partitions (CPTs) to improve CPU
+ cache usage and memory locality. Default values for CPT counts and CPU core
bindings are selected automatically to provide good overall performance for
a given CPU count. However, an administrator can deviate from these
settings if desired. For details on specifying the mapping of CPU cores to
<para>By default, this parameter is off. As always, you should test the
performance to compare the impact of changing this parameter.</para>
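<para>For example, the MDS service threads could be restricted to the
first CPT with the module option below. This is a sketch only; it assumes
the <literal>mds_num_cpts</literal> parameter of the
<literal>mdt</literal> module is available on your release:</para>
<screen>options mdt mds_num_cpts=[0]</screen>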
</section>
- <section condition='l23'>
+ <section>
<title>
<indexterm>
<primary>tuning</primary>
<secondary>Network interface binding</secondary>
</indexterm>Binding Network Interface Against CPU Partitions</title>
- <para>Lustre software release 2.3 and beyond provide enhanced network
- interface control. The enhancement means that an administrator can bind
- an interface to one or more CPU partitions. Bindings are specified as
- options to the LNet modules. For more information on specifying module
- options, see
+ <para>Lustre provides enhanced network interface control, allowing an
+ administrator to bind an interface to one or more CPU partitions.
+ Bindings are specified as options to the LNet modules. For more
+ information on specifying module options, see
<xref linkend="dbdoclet.50438293_15350" /></para>
<para>For example,
<literal>o2ib0(ib0)[0,1]</literal> will ensure that all messages for
<screen>
ko2iblnd credits=256
</screen>
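<para>As an illustration, such a binding might appear in the LNet module
options as shown below; the interface name and CPT indices here are
placeholders to be adapted to the local configuration:</para>
<screen>options lnet networks="o2ib0(ib0)[0,1]"</screen>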
- <note condition="l23">
- <para>In Lustre software release 2.3 and beyond, LNet may revalidate
- the NI credits, so the administrator's request may not persist.</para>
+ <note>
+ <para>LNet may revalidate the NI credits, so the administrator's
+ request may not persist.</para>
</note>
</section>
<section>
<screen>
lnet large_router_buffers=8192
</screen>
- <note condition="l23">
- <para>In Lustre software release 2.3 and beyond, LNet may revalidate
- the router buffer setting, so the administrator's request may not
- persist.</para>
+ <note>
+ <para>LNet may revalidate the router buffer setting, so the
+ administrator's request may not persist.</para>
</note>
</section>
<section>
be MAX.</para>
</section>
</section>
- <section xml:id="dbdoclet.libcfstuning" condition='l23'>
+ <section xml:id="dbdoclet.libcfstuning">
<title>
<indexterm>
<primary>tuning</primary>
<secondary>libcfs</secondary>
</indexterm>libcfs Tuning</title>
- <para>Lustre software release 2.3 introduced binding service threads via
- CPU Partition Tables (CPTs). This allows the system administrator to
- fine-tune on which CPU cores the Lustre service threads are run, for both
- OSS and MDS services, as well as on the client.
+ <para>Lustre supports binding service threads via CPU Partition Tables
+ (CPTs). This allows the system administrator to fine-tune which CPU
+ cores the Lustre service threads run on, for both OSS and MDS services,
+ as well as on the client.
</para>
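<para>As a sketch, CPTs can be defined with the
<literal>libcfs</literal> <literal>cpu_pattern</literal> module
parameter; the partition and core numbers below are examples only:</para>
<screen>options libcfs cpu_pattern="0[0,2,4,6] 1[1,3,5,7]"</screen>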
<para>CPTs are useful to reserve some cores on the OSS or MDS nodes for
system functions such as system monitoring, HA heartbeat, or similar
Server-Side Advice and Hinting
</title>
<section><title>Overview</title>
- <para>Use the <literal>lfs ladvise</literal> command give file access
+ <para>Use the <literal>lfs ladvise</literal> command to give file access
advice or hints to servers.</para>
<screen>lfs ladvise [--advice|-a ADVICE ] [--background|-b]
[--start|-s START[kMGT]]
cache</para>
<para><literal>dontneed</literal> to clean up the data cache on the
server</para>
+ <para><literal>lockahead</literal> Request an LDLM extent lock
+ of the given mode on the given byte range</para>
+ <para><literal>noexpand</literal> Disable extent lock expansion
+ behavior for I/O to this file descriptor</para>
</entry>
</row>
<row>
<literal>-e</literal> option.</para>
</entry>
</row>
+ <row>
+ <entry>
+ <para><literal>-m</literal>, <literal>--mode=</literal>
+ <literal>MODE</literal></para>
+ </entry>
+ <entry>
+ <para>Lockahead request mode <literal>{READ,WRITE}</literal>.
+ Request a lock with this mode.</para>
+ </entry>
+ </row>
</tbody>
</tgroup>
</informaltable>
random I/O is a net benefit. Fetching that data into each client cache with
<literal>fadvise()</literal> may not be, due to much more data being sent to the client.
</para>
+ <para>
+ <literal>ladvise lockahead</literal> is different in that it attempts to
+ control LDLM locking behavior by explicitly requesting LDLM locks in
+ advance of use. This does not directly affect caching behavior; instead,
+ it is used in special cases to avoid pathological results (lock exchange)
+ from the normal LDLM locking behavior.
+ </para>
+ <para>
+ Note that the <literal>noexpand</literal> advice works on a specific
+ file descriptor, so using it via <literal>lfs</literal> has no effect;
+ it must be applied to the file descriptor actually used for I/O.
+ </para>
<para>The main difference between the Linux <literal>fadvise()</literal>
system call and <literal>lfs ladvise</literal> is that
<literal>fadvise()</literal> is only a client-side mechanism that does
cache of the file in memory.</para>
<screen>client1$ lfs ladvise -a dontneed -s 0 -e 1048576000 /mnt/lustre/file1
</screen>
+ <para>The following example requests an LDLM read lock on the first
+ 1 MiB of <literal>/mnt/lustre/file1</literal>. This will attempt to
+ request a lock from the OST holding that region of the file.</para>
+ <screen>client1$ lfs ladvise -a lockahead -m READ -s 0 -e 1M /mnt/lustre/file1
+ </screen>
+ <para>The following example requests an LDLM write lock on
+ [3 MiB, 10 MiB] of <literal>/mnt/lustre/file1</literal>. This will
+ attempt to request a lock from the OST holding that region of the
+ file.</para>
+ <screen>client1$ lfs ladvise -a lockahead -m WRITE -s 3M -e 10M /mnt/lustre/file1
+ </screen>
</section>
</section>
<section condition="l29">