service immediately and disables automatic thread creation behavior.
</para>
</note>
- <para condition='l23'>Lustre software release 2.3 introduced new
- parameters to provide more control to administrators.</para>
+ <para>Parameters are available to give administrators control
+ over the number of service threads; a brief example follows the
+ list below.</para>
<itemizedlist>
<listitem>
<para>
</itemizedlist>
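+ <para>As a brief sketch (the parameter name
+ <literal>mds.MDS.mdt.threads_max</literal> is used here for
+ illustration; the exact names depend on the service being tuned),
+ the maximum thread count can be inspected and adjusted with
+ <literal>lctl</literal>:</para>
+ <screen>mds# lctl get_param mds.MDS.mdt.threads_max
+ mds# lctl set_param mds.MDS.mdt.threads_max=512</screen>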
</section>
</section>
- <section xml:id="dbdoclet.mdsbinding" condition='l23'>
+ <section xml:id="dbdoclet.mdsbinding">
<title>
<indexterm>
<primary>tuning</primary>
<secondary>MDS binding</secondary>
</indexterm>Binding MDS Service Thread to CPU Partitions</title>
- <para>With the introduction of Node Affinity (
- <xref linkend="nodeaffdef" />) in Lustre software release 2.3, MDS threads
- can be bound to particular CPU partitions (CPTs) to improve CPU cache
- usage and memory locality. Default values for CPT counts and CPU core
+ <para>With the Node Affinity (<xref linkend="nodeaffdef" />) feature,
+ MDS threads can be bound to particular CPU partitions (CPTs) to improve CPU
+ cache usage and memory locality. Default values for CPT counts and CPU core
bindings are selected automatically to provide good overall performance for
- a given CPU count. However, an administrator can deviate from these setting
+ a given CPU count. However, an administrator can deviate from these settings
if they choose. For details on specifying the mapping of CPU cores to
- <para>By default, this parameter is off. As always, you should test the
- performance to compare the impact of changing this parameter.</para>
+ <para>By default, this parameter is off. As always, you should test
+ performance to assess the impact of changing this parameter.</para>
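+ <para>As an illustrative sketch (assuming the
+ <literal>mds_num_cpts</literal> option of the <literal>mdt</literal>
+ module; check which options your release supports), a binding could
+ be set at module load time in
+ <literal>/etc/modprobe.d/lustre.conf</literal>:</para>
+ <screen>options mdt mds_num_cpts=[0-3]</screen>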
</section>
- <section condition='l23'>
+ <section>
<title>
<indexterm>
<primary>tuning</primary>
<secondary>Network interface binding</secondary>
</indexterm>Binding Network Interface Against CPU Partitions</title>
- <para>Lustre software release 2.3 and beyond provide enhanced network
- interface control. The enhancement means that an administrator can bind
- an interface to one or more CPU partitions. Bindings are specified as
- options to the LNet modules. For more information on specifying module
- options, see
+ <para>Lustre provides enhanced network interface control, allowing
+ an administrator to bind an interface to one or more CPU partitions.
+ Bindings are specified as options to the LNet modules. For more
+ information on specifying module options, see
<xref linkend="dbdoclet.50438293_15350" /></para>
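+ <para>A minimal sketch of such a binding in
+ <literal>/etc/modprobe.d/lustre.conf</literal> (the interface name
+ <literal>ib0</literal> and the CPT list are assumptions for this
+ example):</para>
+ <screen>options lnet networks="o2ib0(ib0)[0,1]"</screen>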
<para>For example,
<literal>o2ib0(ib0)[0,1]</literal> will ensure that all messages for
<screen>
ko2iblnd credits=256
</screen>
- <note condition="l23">
- <para>In Lustre software release 2.3 and beyond, LNet may revalidate
- the NI credits, so the administrator's request may not persist.</para>
+ <note>
+ <para>LNet may revalidate the NI credits, so the administrator's
+ request may not persist.</para>
</note>
</section>
<section>
<screen>
lnet large_router_buffers=8192
</screen>
- <note condition="l23">
- <para>In Lustre software release 2.3 and beyond, LNet may revalidate
- the router buffer setting, so the administrator's request may not
- persist.</para>
+ <note>
+ <para>LNet may revalidate the router buffer setting, so the
+ administrator's request may not persist.</para>
</note>
</section>
<section>
be MAX.</para>
</section>
</section>
- <section xml:id="dbdoclet.libcfstuning" condition='l23'>
+ <section xml:id="dbdoclet.libcfstuning">
<title>
<indexterm>
<primary>tuning</primary>
<secondary>libcfs</secondary>
</indexterm>libcfs Tuning</title>
- <para>Lustre software release 2.3 introduced binding service threads via
- CPU Partition Tables (CPTs). This allows the system administrator to
- fine-tune on which CPU cores the Lustre service threads are run, for both
- OSS and MDS services, as well as on the client.
+ <para>Lustre supports binding service threads via CPU Partition
+ Tables (CPTs), allowing the system administrator to fine-tune which
+ CPU cores run the Lustre service threads, for both OSS and MDS
+ services, as well as on the client.
</para>
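+ <para>As a sketch, the CPT layout can be defined with the
+ <literal>libcfs</literal> module's <literal>cpu_pattern</literal>
+ option (the pattern shown, two CPTs splitting eight cores between
+ even and odd cores, is an assumed example):</para>
+ <screen>options libcfs cpu_pattern="0[0,2,4,6] 1[1,3,5,7]"</screen>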
<para>CPTs are useful to reserve some cores on the OSS or MDS nodes for
system functions such as system monitoring, HA heartbeat, or similar
16MB. To temporarily change <literal>brw_size</literal>, the
following command should be run on the OSS:</para>
<screen>oss# lctl set_param obdfilter.<replaceable>fsname</replaceable>-OST*.brw_size=16</screen>
- <para>To persistently change <literal>brw_size</literal>, one of the following
- commands should be run on the OSS:</para>
+ <para>To persistently change <literal>brw_size</literal>, the
+ following command should be run:</para>
<screen>oss# lctl set_param -P obdfilter.<replaceable>fsname</replaceable>-OST*.brw_size=16</screen>
- <screen>oss# lctl conf_param <replaceable>fsname</replaceable>-OST*.obdfilter.brw_size=16</screen>
<para>When a client connects to an OST target, it will fetch
<literal>brw_size</literal> from the target and pick the maximum value
of <literal>brw_size</literal> and its local setting for
<screen>client$ lctl set_param osc.<replaceable>fsname</replaceable>-OST*.max_pages_per_rpc=16M</screen>
<para>To persistently make this change, the following command should
be run:</para>
- <screen>client$ lctl conf_param <replaceable>fsname</replaceable>-OST*.osc.max_pages_per_rpc=16M</screen>
+ <screen>client$ lctl set_param -P osc.<replaceable>fsname</replaceable>-OST*.max_pages_per_rpc=16M</screen>
<caution><para>The <literal>brw_size</literal> of an OST can be
changed on the fly. However, clients have to be remounted to
renegotiate the new maximum RPC size.</para></caution>
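+ <para>To verify the value a client has negotiated after remounting,
+ the setting can be read back (a quick check; output formatting
+ varies by release):</para>
+ <screen>client$ lctl get_param osc.<replaceable>fsname</replaceable>-OST*.max_pages_per_rpc</screen>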