<?xml version="1.0" encoding="UTF-8"?>
<chapter version="5.0" xml:lang="en-US" xmlns="http://docbook.org/ns/docbook" xmlns:xl="http://www.w3.org/1999/xlink">
  <info>
    <title>Managing Lustre Networking (LNET)</title>
  </info>
  <para><anchor xml:id="dbdoclet.50438203_pgfId-999824" xreflabel=""/>This chapter describes some tools for managing Lustre Networking (LNET) and includes the following sections:</para>
  <itemizedlist><listitem>
      <para><anchor xml:id="dbdoclet.50438203_pgfId-1286381" xreflabel=""/><link xl:href="ManagingLNET.html#50438203_51732">Updating the Health Status of a Peer or Router</link></para>
    </listitem>
<listitem>
      <para><anchor xml:id="dbdoclet.50438203_pgfId-1287154" xreflabel=""/><link xl:href="ManagingLNET.html#50438203_48703">Starting and Stopping LNET</link></para>
    </listitem>
<listitem>
      <para><anchor xml:id="dbdoclet.50438203_pgfId-1289983" xreflabel=""/><link xl:href="ManagingLNET.html#50438203_82542">Multi-Rail Configurations with LNET</link></para>
    </listitem>
<listitem>
      <para><anchor xml:id="dbdoclet.50438203_pgfId-1290404" xreflabel=""/><link xl:href="ManagingLNET.html#50438203_78227">Load Balancing with InfiniBand</link></para>
    </listitem>
</itemizedlist>
    <section remap="h2">
      <title>15.1 <anchor xml:id="dbdoclet.50438203_51732" xreflabel=""/>Updating the Health Status of a Peer or <anchor xml:id="dbdoclet.50438203_marker-1288828" xreflabel=""/>Router</title>
      <para><anchor xml:id="dbdoclet.50438203_pgfId-1287380" xreflabel=""/>There are two mechanisms to update the health status of a peer or a router:</para>
      <itemizedlist><listitem>
          <para><anchor xml:id="dbdoclet.50438203_pgfId-1287381" xreflabel=""/> LNET can actively check the health status of all routers and mark them as dead or alive automatically. By default, this is off. To enable it, set the auto_down module parameter and, if desired, check_routers_before_use (see the example following this list). This initial check may cause a pause equal to router_ping_timeout at system startup, if there are dead routers in the system.</para>
        </listitem>
<listitem>
          <para><anchor xml:id="dbdoclet.50438203_pgfId-1287382" xreflabel=""/> When there is a communication error, all LNDs notify LNET that the peer (not necessarily a router) is down. This mechanism is always on, and there is no parameter to turn it off. However, if you set the LNET module parameter auto_down to 0, LNET ignores all such peer-down notifications.</para>
        </listitem>
</itemizedlist>
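      <para>Both mechanisms are controlled by LNET module parameters. The following modprobe.conf line is a minimal sketch showing how they could be enabled; the values are illustrative, not recommendations:</para>
      <screen>options lnet auto_down=1 check_routers_before_use=1
</screen>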
      <para><anchor xml:id="dbdoclet.50438203_pgfId-1287383" xreflabel=""/>There are several key differences between the two mechanisms:</para>
      <itemizedlist><listitem>
          <para><anchor xml:id="dbdoclet.50438203_pgfId-1287384" xreflabel=""/> The router pinger only checks routers for their health, while LNDs notice all dead peers, regardless of whether they are routers or not.</para>
        </listitem>
<listitem>
          <para><anchor xml:id="dbdoclet.50438203_pgfId-1287385" xreflabel=""/> The router pinger actively checks the router health by sending pings, but LNDs only notice a dead peer when there is network traffic going on.</para>
        </listitem>
<listitem>
          <para><anchor xml:id="dbdoclet.50438203_pgfId-1287386" xreflabel=""/> The router pinger can bring a router from alive to dead or vice versa, but LNDs can only bring a peer down.</para>
        </listitem>
</itemizedlist>
    </section>
    <section remap="h2">
      <title>15.2 <anchor xml:id="dbdoclet.50438203_48703" xreflabel=""/>Starting and Stopping LNET</title>
      <para><anchor xml:id="dbdoclet.50438203_pgfId-1287399" xreflabel=""/>Lustre automatically starts and stops LNET, but it can also be manually started in a standalone manner. This is particularly useful to verify that your networking setup is working correctly before you attempt to start Lustre.</para>
      <section remap="h3">
        <title><anchor xml:id="dbdoclet.50438203_pgfId-1287402" xreflabel=""/>15.2.1 Starting <anchor xml:id="dbdoclet.50438203_marker-1287400" xreflabel=""/>LNET</title>
        <para><anchor xml:id="dbdoclet.50438203_pgfId-1287403" xreflabel=""/>To start LNET, run:</para>
        <screen><anchor xml:id="dbdoclet.50438203_pgfId-1287404" xreflabel=""/>$ modprobe lnet
<anchor xml:id="dbdoclet.50438203_pgfId-1287405" xreflabel=""/>$ lctl network up
</screen>
        <para><anchor xml:id="dbdoclet.50438203_pgfId-1287406" xreflabel=""/>To see the list of local NIDs, run:</para>
        <screen><anchor xml:id="dbdoclet.50438203_pgfId-1287407" xreflabel=""/>$ lctl list_nids
</screen>
        <para><anchor xml:id="dbdoclet.50438203_pgfId-1289144" xreflabel=""/>This command tells you which network(s) are configured to work with Lustre.</para>
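        <para>The NIDs are printed one per line in &lt;address&gt;@&lt;network&gt; form. For example, on a node configured with one TCP and one InfiniBand network, the output might look like the following (the addresses shown are illustrative):</para>
        <screen>$ lctl list_nids
192.168.0.12@tcp
10.10.10.12@o2ib
</screen>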
        <para><anchor xml:id="dbdoclet.50438203_pgfId-1287409" xreflabel=""/>If the networks are not correctly set up, check the modules.conf &quot;networks=&quot; line and make sure the network layer modules are correctly installed and configured.</para>
        <para><anchor xml:id="dbdoclet.50438203_pgfId-1287410" xreflabel=""/>To get the best remote NID, run:</para>
        <screen><anchor xml:id="dbdoclet.50438203_pgfId-1287411" xreflabel=""/>$ lctl which_nid &lt;NID list&gt;
</screen>
        <para><anchor xml:id="dbdoclet.50438203_pgfId-1287412" xreflabel=""/>where &lt;NID list&gt; is the list of available NIDs.</para>
        <para><anchor xml:id="dbdoclet.50438203_pgfId-1287413" xreflabel=""/>This command takes the &quot;best&quot; NID from a list of the NIDs of a remote host. The &quot;best&quot; NID is the one that the local node uses when trying to communicate with the remote node.</para>
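        <para>For example, if a remote host is reachable on both a TCP and an InfiniBand network, you can pass both of its NIDs and lctl which_nid reports the one the local node would use. The NIDs and output below are illustrative only:</para>
        <screen>$ lctl which_nid 192.168.0.20@tcp 10.10.10.20@o2ib
10.10.10.20@o2ib
</screen>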
        <section remap="h4">
          <title><anchor xml:id="dbdoclet.50438203_pgfId-1287415" xreflabel=""/>15.2.1.1 <anchor xml:id="dbdoclet.50438203_46145" xreflabel=""/>Starting Clients</title>
          <para><anchor xml:id="dbdoclet.50438203_pgfId-1287416" xreflabel=""/>To start a TCP client, run:</para>
          <screen><anchor xml:id="dbdoclet.50438203_pgfId-1287417" xreflabel=""/>mount -t lustre mdsnode:/mdsA/client /mnt/lustre/
</screen>
          <para><anchor xml:id="dbdoclet.50438203_pgfId-1287418" xreflabel=""/>To start an Elan client, run:</para>
          <screen><anchor xml:id="dbdoclet.50438203_pgfId-1287419" xreflabel=""/>mount -t lustre 2@elan0:/mdsA/client /mnt/lustre
</screen>
        </section>
      </section>
      <section remap="h3">
        <title><anchor xml:id="dbdoclet.50438203_pgfId-1287473" xreflabel=""/>15.2.2 Stopping <anchor xml:id="dbdoclet.50438203_marker-1288543" xreflabel=""/>LNET</title>
        <para><anchor xml:id="dbdoclet.50438203_pgfId-1287480" xreflabel=""/>Before the LNET modules can be removed, LNET references must be removed. In general, these references are removed automatically when Lustre is shut down, but for standalone routers, an explicit step is needed to stop LNET. Run:</para>
        <screen><anchor xml:id="dbdoclet.50438203_pgfId-1287481" xreflabel=""/>lctl network unconfigure
</screen>
        <informaltable frame="none">
          <tgroup cols="1">
            <colspec colname="c1" colwidth="100*"/>
            <tbody>
              <row>
                <entry><para><emphasis role="bold">Note -</emphasis><anchor xml:id="dbdoclet.50438203_pgfId-1287482" xreflabel=""/>Attempting to remove Lustre modules prior to stopping the network may result in a crash or an LNET hang. If this occurs, the node must be rebooted (in most cases). Make sure that the Lustre network and Lustre itself are stopped prior to unloading the modules. Be extremely careful when using rmmod -f.</para></entry>
              </row>
            </tbody>
          </tgroup>
        </informaltable>
        <para><anchor xml:id="dbdoclet.50438203_pgfId-1287486" xreflabel=""/>To remove the LND and LNET modules, run:</para>
        <screen><anchor xml:id="dbdoclet.50438203_pgfId-1287487" xreflabel=""/>modprobe -r &lt;any lnd and the lnet modules&gt;
</screen>
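        <para>For example, on a node that uses only the TCP (socket) LND, the following sketch removes the LND module and then LNET itself. The socket LND module is typically named ksocklnd; substitute the LND module(s) actually loaded on your node:</para>
        <screen>$ modprobe -r ksocklnd lnet
</screen>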
        <informaltable frame="none">
          <tgroup cols="1">
            <colspec colname="c1" colwidth="100*"/>
            <tbody>
              <row>
                <entry><para><emphasis role="bold">Tip -</emphasis><anchor xml:id="dbdoclet.50438203_pgfId-1287488" xreflabel=""/>To remove all Lustre modules, run:</para><para>$ lctl modules | awk &apos;{print $2}&apos; | xargs rmmod</para></entry>
              </row>
            </tbody>
          </tgroup>
        </informaltable>
      </section>
    </section>
    <section remap="h2">
      <title>15.3 <anchor xml:id="dbdoclet.50438203_72197" xreflabel=""/><anchor xml:id="dbdoclet.50438203_82542" xreflabel=""/>Multi-Rail Configurations with <anchor xml:id="dbdoclet.50438203_Multi-rail-configurations-with-LNET-LNET" xreflabel=""/>LNET</title>
      <para><anchor xml:id="dbdoclet.50438203_pgfId-1289933" xreflabel=""/>To aggregate bandwidth across both rails of a dual-rail IB cluster (o2iblnd) <footnote><para><anchor xml:id="dbdoclet.50438203_pgfId-1289932" xreflabel=""/>Multi-rail configurations are only supported by o2iblnd; other IB LNDs do not support multiple interfaces.</para></footnote> using LNET, consider these points:</para>
      <itemizedlist><listitem>
          <para><anchor xml:id="dbdoclet.50438203_pgfId-1289934" xreflabel=""/> LNET can work with multiple rails; however, it does not load balance across them. The actual rail used for any communication is determined by the peer NID.</para>
        </listitem>
<listitem>
          <para><anchor xml:id="dbdoclet.50438203_pgfId-1289935" xreflabel=""/> Multi-rail LNET configurations do not provide an additional level of network fault tolerance. The configurations described below are for bandwidth aggregation only. Network interface failover is planned as an upcoming Lustre feature.</para>
        </listitem>
<listitem>
          <para><anchor xml:id="dbdoclet.50438203_pgfId-1289936" xreflabel=""/> A Lustre node always uses the same local NID to communicate with a given peer NID. The criteria used to determine the local NID are (see the example following this list):</para>
          <itemizedlist><listitem>
              <para><anchor xml:id="dbdoclet.50438203_pgfId-1289937" xreflabel=""/> Fewest hops (to minimize routing), and</para>
            </listitem>
<listitem>
              <para><anchor xml:id="dbdoclet.50438203_pgfId-1289938" xreflabel=""/> Appears first in the &quot;networks&quot; or &quot;ip2nets&quot; LNET configuration strings</para>
            </listitem>
</itemizedlist>
        </listitem>
</itemizedlist>
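      <para>As an illustration of the second criterion, consider a node configured with the following (illustrative) module option. If two of the node's local NIDs are an equal number of hops from a given peer, the node uses its o2ib0 NID, because o2ib0(ib0) appears first in the &quot;networks&quot; string:</para>
      <screen>options lnet networks=&quot;o2ib0(ib0),o2ib1(ib1)&quot;
</screen>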
    </section>
    <section remap="h2">
      <title>15.4 <anchor xml:id="dbdoclet.50438203_78227" xreflabel=""/>Load Balancing with InfiniBand</title>
      <para><anchor xml:id="dbdoclet.50438203_pgfId-1290370" xreflabel=""/>This example describes a Lustre file system in which the OSSs have two InfiniBand HCAs each, while the Lustre clients have only one InfiniBand HCA, using the OFED InfiniBand &apos;&apos;o2ib&apos;&apos; drivers. Load balancing between the HCAs on the OSS is accomplished through LNET.</para>
      <section remap="h3">
        <title><anchor xml:id="dbdoclet.50438203_pgfId-1290317" xreflabel=""/>15.4.1 Setting Up modprobe.conf<anchor xml:id="dbdoclet.50438203_77834" xreflabel=""/><anchor xml:id="dbdoclet.50438203_marker-1290316" xreflabel=""/> for Load Balancing</title>
        <para><anchor xml:id="dbdoclet.50438203_pgfId-1290318" xreflabel=""/>To configure LNET for load balancing on clients and servers:</para>
        <para><anchor xml:id="dbdoclet.50438203_pgfId-1290319" xreflabel=""/> 1. Set the modprobe.conf options.</para>
        <para><anchor xml:id="dbdoclet.50438203_pgfId-1290320" xreflabel=""/>Depending on your configuration, set modprobe.conf options as follows:</para>
        <itemizedlist><listitem>
            <para><anchor xml:id="dbdoclet.50438203_pgfId-1290321" xreflabel=""/> Dual HCA OSS server</para>
          </listitem>
</itemizedlist>
        <screen><anchor xml:id="dbdoclet.50438203_pgfId-1290322" xreflabel=""/>options lnet networks=&quot;o2ib0(ib0),o2ib1(ib1)&quot;
</screen>
        <itemizedlist><listitem>
            <para><anchor xml:id="dbdoclet.50438203_pgfId-1290323" xreflabel=""/> Client with the odd IP address</para>
          </listitem>
</itemizedlist>
        <screen><anchor xml:id="dbdoclet.50438203_pgfId-1290324" xreflabel=""/>options lnet networks=&quot;o2ib0(ib0)&quot;
</screen>
        <itemizedlist><listitem>
            <para><anchor xml:id="dbdoclet.50438203_pgfId-1290325" xreflabel=""/> Client with the even IP address</para>
          </listitem>
</itemizedlist>
        <screen><anchor xml:id="dbdoclet.50438203_pgfId-1290326" xreflabel=""/>options lnet networks=&quot;o2ib1(ib0)&quot;
</screen>
        <para><anchor xml:id="dbdoclet.50438203_pgfId-1290327" xreflabel=""/> 2. Run the modprobe lnet command and create a combined MGS/MDT file system.</para>
        <para><anchor xml:id="dbdoclet.50438203_pgfId-1290328" xreflabel=""/>The following commands create the MGS/MDT file system and mount the servers (MGS/MDT and OSS).</para>
        <screen><anchor xml:id="dbdoclet.50438203_pgfId-1290329" xreflabel=""/>modprobe lnet
<anchor xml:id="dbdoclet.50438203_pgfId-1290330" xreflabel=""/>$ mkfs.lustre --fsname lustre --mgs --mdt &lt;block device name&gt;
<anchor xml:id="dbdoclet.50438203_pgfId-1290331" xreflabel=""/>$ mkdir -p &lt;mount point&gt;
<anchor xml:id="dbdoclet.50438203_pgfId-1290332" xreflabel=""/>$ mount -t lustre &lt;block device&gt; &lt;mount point&gt;
<anchor xml:id="dbdoclet.50438203_pgfId-1290333" xreflabel=""/>$ mount -t lustre &lt;MGS NID&gt;:/&lt;fsname&gt; &lt;mount point&gt;
<anchor xml:id="dbdoclet.50438203_pgfId-1290334" xreflabel=""/>$ mkfs.lustre --fsname lustre --ost --mgsnode=&lt;MGS NID&gt; &lt;block device name&gt;
<anchor xml:id="dbdoclet.50438203_pgfId-1290335" xreflabel=""/>$ mkdir -p &lt;mount point&gt;
<anchor xml:id="dbdoclet.50438203_pgfId-1290336" xreflabel=""/>$ mount -t lustre &lt;block device&gt; &lt;mount point&gt;
<anchor xml:id="dbdoclet.50438203_pgfId-1290337" xreflabel=""/>$ mount -t lustre &lt;MGS NID&gt;:/&lt;fsname&gt; &lt;mount point&gt;
</screen>
        <para><anchor xml:id="dbdoclet.50438203_pgfId-1290338" xreflabel=""/>For example:</para>
        <screen><anchor xml:id="dbdoclet.50438203_pgfId-1290339" xreflabel=""/>modprobe lnet
<anchor xml:id="dbdoclet.50438203_pgfId-1290340" xreflabel=""/>$ mkfs.lustre --fsname lustre --mdt --mgs /dev/sda
<anchor xml:id="dbdoclet.50438203_pgfId-1290341" xreflabel=""/>$ mkdir -p /mnt/test/mdt
<anchor xml:id="dbdoclet.50438203_pgfId-1290342" xreflabel=""/>$ mount -t lustre /dev/sda /mnt/test/mdt
<anchor xml:id="dbdoclet.50438203_pgfId-1290343" xreflabel=""/>$ mount -t lustre mgs@o2ib0:/lustre /mnt/mdt
<anchor xml:id="dbdoclet.50438203_pgfId-1290344" xreflabel=""/>$ mkfs.lustre --fsname lustre --ost --mgsnode=mds@o2ib0 /dev/sda
<anchor xml:id="dbdoclet.50438203_pgfId-1290345" xreflabel=""/>$ mkdir -p /mnt/test/ost
<anchor xml:id="dbdoclet.50438203_pgfId-1290346" xreflabel=""/>$ mount -t lustre /dev/sda /mnt/test/ost
<anchor xml:id="dbdoclet.50438203_pgfId-1290347" xreflabel=""/>$ mount -t lustre mgs@o2ib0:/lustre /mnt/ost
</screen>
        <para><anchor xml:id="dbdoclet.50438203_pgfId-1290348" xreflabel=""/> 3. Mount the clients.</para>
        <screen><anchor xml:id="dbdoclet.50438203_pgfId-1290349" xreflabel=""/>mount -t lustre &lt;MGS node&gt;:/&lt;fsname&gt; &lt;mount point&gt;
</screen>
        <para><anchor xml:id="dbdoclet.50438203_pgfId-1290350" xreflabel=""/>This example shows an IB client being mounted.</para>
        <screen><anchor xml:id="dbdoclet.50438203_pgfId-1290351" xreflabel=""/>mount -t lustre
<anchor xml:id="dbdoclet.50438203_pgfId-1290352" xreflabel=""/>192.168.10.101@o2ib0,192.168.10.102@o2ib1:/mds/client /mnt/lustre
</screen>
        <para><anchor xml:id="dbdoclet.50438203_pgfId-1289939" xreflabel=""/>As an example, consider a two-rail IB cluster running the OFA stack (OFED) with these IPoIB address assignments.</para>
        <screen><anchor xml:id="dbdoclet.50438203_pgfId-1289940" xreflabel=""/>                   ib0                             ib1
<anchor xml:id="dbdoclet.50438203_pgfId-1289941" xreflabel=""/>Servers            192.168.0.*                     192.168.1.*
<anchor xml:id="dbdoclet.50438203_pgfId-1289942" xreflabel=""/>Clients            192.168.[2-127].*               192.168.[128-253].*
</screen>
        <para><anchor xml:id="dbdoclet.50438203_pgfId-1289943" xreflabel=""/>You could create these configurations:</para>
        <itemizedlist><listitem>
            <para><anchor xml:id="dbdoclet.50438203_pgfId-1289944" xreflabel=""/> A cluster with more clients than servers. The fact that an individual client cannot get two rails of bandwidth is unimportant because the servers are the actual bottleneck.</para>
          </listitem>
</itemizedlist>
        <screen><anchor xml:id="dbdoclet.50438203_pgfId-1289945" xreflabel=""/>ip2nets=&quot;o2ib0(ib0),o2ib1(ib1)      192.168.[0-1].*              #all servers;\
<anchor xml:id="dbdoclet.50438203_pgfId-1289946" xreflabel=""/>         o2ib0(ib0)                  192.168.[2-253].[0-252/2]    #even clients;\
<anchor xml:id="dbdoclet.50438203_pgfId-1289947" xreflabel=""/>         o2ib1(ib1)                  192.168.[2-253].[1-253/2]    #odd clients&quot;
</screen>
        <para><anchor xml:id="dbdoclet.50438203_pgfId-1289948" xreflabel=""/>This configuration gives every server two NIDs, one on each network, and statically load-balances clients between the rails.</para>
        <itemizedlist><listitem>
            <para><anchor xml:id="dbdoclet.50438203_pgfId-1289949" xreflabel=""/> A cluster in which an individual client must be able to get two rails of bandwidth, and it does not matter that the maximum aggregate bandwidth is only (# servers) * (1 rail).</para>
          </listitem>
</itemizedlist>
        <screen><anchor xml:id="dbdoclet.50438203_pgfId-1289950" xreflabel=""/>ip2nets=&quot;o2ib0(ib0)                  192.168.[0-1].[0-252/2]      #even servers;\
<anchor xml:id="dbdoclet.50438203_pgfId-1289951" xreflabel=""/>         o2ib1(ib1)                  192.168.[0-1].[1-253/2]      #odd servers;\
<anchor xml:id="dbdoclet.50438203_pgfId-1289952" xreflabel=""/>         o2ib0(ib0),o2ib1(ib1)       192.168.[2-253].*            #clients&quot;
</screen>
        <para><anchor xml:id="dbdoclet.50438203_pgfId-1289953" xreflabel=""/>This configuration gives every server a single NID on one rail or the other. Clients have a NID on both rails.</para>
        <itemizedlist><listitem>
            <para><anchor xml:id="dbdoclet.50438203_pgfId-1289954" xreflabel=""/> All clients and all servers must get two rails of bandwidth.</para>
          </listitem>
</itemizedlist>
        <screen><anchor xml:id="dbdoclet.50438203_pgfId-1289955" xreflabel=""/>ip2nets=&quot;o2ib0(ib0),o2ib2(ib1)       192.168.[0-1].[0-252/2]      #even servers;\
<anchor xml:id="dbdoclet.50438203_pgfId-1289956" xreflabel=""/>         o2ib1(ib0),o2ib3(ib1)       192.168.[0-1].[1-253/2]      #odd servers;\
<anchor xml:id="dbdoclet.50438203_pgfId-1289957" xreflabel=""/>         o2ib0(ib0),o2ib3(ib1)       192.168.[2-253].[0-252/2]    #even clients;\
<anchor xml:id="dbdoclet.50438203_pgfId-1289958" xreflabel=""/>         o2ib1(ib0),o2ib2(ib1)       192.168.[2-253].[1-253/2]    #odd clients&quot;
</screen>
        <para><anchor xml:id="dbdoclet.50438203_pgfId-1289959" xreflabel=""/>This configuration includes two additional proxy o2ib networks to work around Lustre&apos;s simplistic NID selection algorithm. It connects &quot;even&quot; clients to &quot;even&quot; servers with o2ib0 on rail0, and to &quot;odd&quot; servers with o2ib3 on rail1. Similarly, it connects &quot;odd&quot; clients to &quot;odd&quot; servers with o2ib1 on rail0, and to &quot;even&quot; servers with o2ib2 on rail1.</para>
      </section>
    </section>
</chapter>