1 <?xml version='1.0' encoding='utf-8'?>
2 <chapter xmlns="http://docbook.org/ns/docbook"
3  xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en-US"
4  xml:id="configuringlustre">
5   <title xml:id="configuringlustre.title">Configuring a Lustre File
6   System</title>
  <para>This chapter shows how to configure a simple Lustre file system
  consisting of a combined MGS/MDT, an OST, and a client. It includes:</para>
9   <itemizedlist>
10     <listitem>
11       <para>
12         <xref linkend="lustre_configure" />
13       </para>
14     </listitem>
15     <listitem>
16       <para>
17         <xref linkend="lustre_configure_additional_options" />
18       </para>
19     </listitem>
20   </itemizedlist>
21   <section xml:id="lustre_configure">
22     <title>
23     <indexterm>
24       <primary>Lustre</primary>
25       <secondary>configuring</secondary>
26     </indexterm>Configuring a Simple Lustre File System</title>
27     <para>A Lustre file system can be set up in a variety of configurations by
28     using the administrative utilities provided with the Lustre software. The
29     procedure below shows how to configure a simple Lustre file system
30     consisting of a combined MGS/MDS, one OSS with two OSTs, and a client. For
31     an overview of the entire Lustre installation procedure, see 
32     <xref linkend="installoverview" />.</para>
33     <para>This configuration procedure assumes you have completed the
34     following:</para>
35     <itemizedlist>
36       <listitem>
        <para>
        <emphasis role="bold">Set up and configured your hardware</emphasis>.
        For more information about hardware requirements, see
        <xref linkend="settinguplustresystem" />.</para>
42       </listitem>
43       <listitem>
        <para>
        <emphasis role="bold">Downloaded and installed the Lustre
        software.</emphasis> For more information about preparing for and
        installing the Lustre software, see
        <xref linkend="installinglustre" />.</para>
49       </listitem>
50     </itemizedlist>
51     <para>The following optional steps should also be completed, if needed,
52     before the Lustre software is configured:</para>
53     <itemizedlist>
54       <listitem>
        <para>
        <emphasis>Set up a hardware or software RAID on block devices to be
        used as OSTs or MDTs.</emphasis> For information about setting up
        RAID, see the documentation for your RAID controller or
        <xref linkend="configuringstorage" />.</para>
60       </listitem>
61       <listitem>
        <para>
        <emphasis>Set up network interface bonding on Ethernet
        interfaces.</emphasis> For information about setting up network
        interface bonding, see
        <xref linkend="settingupbonding" />.</para>
67       </listitem>
68       <listitem>
        <para>
        <emphasis>Set
        <literal>lnet</literal> module parameters to specify how Lustre
        Networking (LNet) is to be configured to work with a Lustre file
        system and test the LNet configuration.</emphasis> By default, LNet
        uses the first TCP/IP interface it discovers on a system. If this
        network configuration is sufficient, you do not need to configure
        LNet. LNet configuration is required if you are using InfiniBand or
        multiple Ethernet interfaces; see the example following this
        list.</para>
78       </listitem>
79     </itemizedlist>
    <para>For information about configuring LNet, see
    <xref linkend="configuringlnet" />. For information about testing LNet,
    see
    <xref linkend="lnetselftest" />.</para>
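    <para>As an illustration only, LNet module options are typically set in a
    file such as
    <literal>/etc/modprobe.d/lustre.conf</literal>. For a server that should
    use a single InfiniBand interface, the entry might look like the line
    below (the interface name
    <literal>ib0</literal> is an assumption and must match your
    system):</para>
    <screen>
options lnet networks=o2ib0(ib0)
</screen>
    <para>A node using a specific Ethernet interface instead might use
    <literal>options lnet networks=tcp0(eth1)</literal>. See
    <xref linkend="configuringlnet" /> for the complete syntax.</para>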
84     <itemizedlist>
85       <listitem>
        <para>
        <emphasis>Run the benchmark script
        <literal>sgpdd-survey</literal> to determine baseline performance of
        your hardware.</emphasis> Benchmarking your hardware will simplify
        debugging performance issues that are unrelated to the Lustre software
        and ensure you are getting the best possible performance with your
        installation. For information about running
        <literal>sgpdd-survey</literal>, see
        <xref linkend="benchmarkingtests" />.</para>
95       </listitem>
96     </itemizedlist>
97     <note>
      <para>The
      <literal>sgpdd-survey</literal> script overwrites the device being
      tested, so it must be run before the OSTs are configured.</para>
101     </note>
102     <para>To configure a simple Lustre file system, complete these
103     steps:</para>
104     <orderedlist>
105       <listitem>
106         <para>Create a combined MGS/MDT file system on a block device. On the
107         MDS node, run:</para>
        <screen>
mkfs.lustre --fsname=<replaceable>fsname</replaceable> --mgs --mdt --index=0 <replaceable>/dev/block_device</replaceable>
</screen>
113         <para>The default file system name (
114         <literal>fsname</literal>) is 
115         <literal>lustre</literal>.</para>
116         <note>
117           <para>If you plan to create multiple file systems, the MGS should be
118           created separately on its own dedicated block device, by
119           running:</para>
          <screen>
mkfs.lustre --fsname=<replaceable>fsname</replaceable> --mgs <replaceable>/dev/block_device</replaceable>
</screen>
          <para>See
          <xref linkend="lustre_configure_multiple_fs" /> for more
          details.</para>
127         </note>
128       </listitem>
129       <listitem xml:id="addmdtindex">
        <para>Optionally, add additional MDTs.</para>
        <screen>
mkfs.lustre --fsname=<replaceable>fsname</replaceable> --mgsnode=<replaceable>nid</replaceable> --mdt --index=1 <replaceable>/dev/block_device</replaceable>
</screen>
137         <note>
138           <para>Up to 4095 additional MDTs can be added.</para>
139         </note>
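        <para>As an illustration, using the example file system name and MGS
        NID from the configuration example later in this chapter (the device
        name
        <literal>/dev/sde</literal> is hypothetical), a second MDT could be
        formatted with:</para>
        <screen>
mkfs.lustre --fsname=temp --mgsnode=10.2.0.1@tcp0 --mdt --index=1 /dev/sde
</screen>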
140       </listitem>
141       <listitem>
142         <para>Mount the combined MGS/MDT file system on the block device. On
143         the MDS node, run:</para>
        <screen>
mount -t lustre <replaceable>/dev/block_device</replaceable> <replaceable>/mount_point</replaceable>
</screen>
149         <note>
150           <para>If you have created an MGS and an MDT on separate block
151           devices, mount them both.</para>
152         </note>
153       </listitem>
154       <listitem xml:id="format_ost">
155         <para>Create the OST. On the OSS node, run:</para>
        <screen>
mkfs.lustre --fsname=<replaceable>fsname</replaceable> --mgsnode=<replaceable>MGS_NID</replaceable> --ost --index=<replaceable>OST_index</replaceable> <replaceable>/dev/block_device</replaceable>
</screen>
        <para>When you create an OST, you are formatting an
        <literal>ldiskfs</literal> or
        <literal>ZFS</literal> file system on a block storage device, just as
        you would with any local file system.</para>
167         <para>You can have as many OSTs per OSS as the hardware or drivers
168         allow. For more information about storage and memory requirements for a
169         Lustre file system, see 
170         <xref linkend="settinguplustresystem" />.</para>
171         <para>You can only configure one OST per block device. You should
172         create an OST that uses the raw block device and does not use
173         partitioning.</para>
        <para>You should specify the OST index number at format time to
        simplify translating the OST number reported in error messages or
        file striping information back to the OSS node and block device later
        on.</para>
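        <para>As an illustration (the file path is hypothetical), the OST
        index used by an existing file can be displayed on a client
        with:</para>
        <screen>
lfs getstripe /mnt/lustre/file1
</screen>
        <para>The
        <literal>obdidx</literal> column in the output identifies the OST
        index of each object in the file.</para>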
        <para>If you are using block devices that are accessible from multiple
        OSS nodes, ensure that you mount the OSTs from only one OSS node at a
        time. It is strongly recommended that multiple-mount protection be
        enabled for such devices to prevent serious data corruption. For more
        information about multiple-mount protection, see
        <xref linkend="managingfailover" />.</para>
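        <para>As a hedged illustration for ldiskfs-based targets (the device
        name is an example and the installed e2fsprogs must support the
        <literal>mmp</literal> feature), multiple-mount protection can be
        enabled on an unmounted device with:</para>
        <screen>
tune2fs -O mmp /dev/sdc
</screen>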
183         <note>
          <para>The Lustre software currently supports block devices up to 128
          TB on Red Hat Enterprise Linux 5 and 6 (up to 8 TB on other
          distributions). If the device size is only slightly larger than 16
          TB, it is recommended that you limit the file system size to 16 TB
          at format time. We recommend that you not place DOS partitions on
          top of RAID 5/6 block devices due to negative impacts on
          performance, but instead format the whole disk for the file
          system.</para>
191         </note>
192       </listitem>
193       <listitem xml:id="mount_ost">
194         <para>Mount the OST. On the OSS node where the OST was created,
195         run:</para>
        <screen>
mount -t lustre <replaceable>/dev/block_device</replaceable> <replaceable>/mount_point</replaceable>
</screen>
201         <note>
          <para>To create additional OSTs, repeat Step
          <xref linkend="format_ost" /> and Step
          <xref linkend="mount_ost" />, specifying the
          next higher OST index number.</para>
206         </note>
207       </listitem>
208       <listitem xml:id="mount_on_client">
209         <para>Mount the Lustre file system on the client. On the client node,
210         run:</para>
        <screen>
mount -t lustre <replaceable>MGS_node</replaceable>:/<replaceable>fsname</replaceable> <replaceable>/mount_point</replaceable>
</screen>
217         <note>
          <para>To mount the file system on additional clients, repeat Step
          <xref linkend="mount_on_client" />.</para>
220         </note>
221         <note>
          <para>If you have a problem mounting the file system, check the
          syslogs on the client and all the servers for errors and also check
          the network settings. A common issue with newly installed systems is
          that
          <literal>hosts.deny</literal> or firewall rules may prevent
          connections on port 988.</para>
228         </note>
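        <para>If the mount fails, one quick way to check LNet connectivity
        from the client to the server is the
        <literal>lctl ping</literal> command, for example:</para>
        <screen>
lctl ping <replaceable>MGS_NID</replaceable>
</screen>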
229       </listitem>
230       <listitem>
231         <para>Verify that the file system started and is working correctly. Do
232         this by running 
233         <literal>lfs df</literal>, 
234         <literal>dd</literal> and 
235         <literal>ls</literal> commands on the client node.</para>
236       </listitem>
237       <listitem>
        <para>
        <emphasis>(Optional)</emphasis> Run benchmarking tools to validate the
        performance of hardware and software layers in the cluster. Available
        tools include:</para>
242         <itemizedlist>
243           <listitem>
            <para>
            <literal>obdfilter-survey</literal> - Characterizes the storage
            performance of a Lustre file system. For details, see
            <xref linkend="benchmark.ost_perf" />.</para>
248           </listitem>
249           <listitem>
            <para>
            <literal>ost-survey</literal> - Performs I/O against OSTs to
            detect anomalies between otherwise identical disk subsystems. For
            details, see
            <xref linkend="benchmark.ost_io" />.</para>
255           </listitem>
256         </itemizedlist>
257       </listitem>
258     </orderedlist>
259     <section remap="h3">
260       <title>
261       <indexterm>
262         <primary>Lustre</primary>
263         <secondary>configuring</secondary>
264         <tertiary>simple example</tertiary>
265       </indexterm>Simple Lustre Configuration Example</title>
266       <para>To see the steps to complete for a simple Lustre file system
267       configuration, follow this example in which a combined MGS/MDT and two
268       OSTs are created to form a file system called 
269       <literal>temp</literal>. Three block devices are used, one for the
270       combined MGS/MDS node and one for each OSS node. Common parameters used
271       in the example are listed below, along with individual node
272       parameters.</para>
273       <informaltable frame="all">
274         <tgroup cols="4">
275           <colspec colname="c1" colwidth="2*" />
276           <colspec colname="c2" colwidth="25*" />
277           <colspec colname="c3" colwidth="25*" />
278           <colspec colname="c4" colwidth="25*" />
279           <thead>
280             <row>
281               <entry nameend="c2" namest="c1">
282                 <para>
283                   <emphasis role="bold">Common Parameters</emphasis>
284                 </para>
285               </entry>
286               <entry>
287                 <para>
288                   <emphasis role="bold">Value</emphasis>
289                 </para>
290               </entry>
291               <entry>
292                 <para>
293                   <emphasis role="bold">Description</emphasis>
294                 </para>
295               </entry>
296             </row>
297           </thead>
298           <tbody>
299             <row>
300               <entry>
301                 <para>&#160;</para>
302               </entry>
303               <entry>
304                 <para>
305                   <emphasis role="bold">MGS node</emphasis>
306                 </para>
307               </entry>
308               <entry>
309                 <para>
310                   <literal>10.2.0.1@tcp0</literal>
311                 </para>
312               </entry>
313               <entry>
314                 <para>Node for the combined MGS/MDS</para>
315               </entry>
316             </row>
317             <row>
318               <entry>
319                 <para>&#160;</para>
320               </entry>
321               <entry>
322                 <para>
323                   <emphasis role="bold">file system</emphasis>
324                 </para>
325               </entry>
326               <entry>
327                 <para>
328                   <literal>temp</literal>
329                 </para>
330               </entry>
331               <entry>
332                 <para>Name of the Lustre file system</para>
333               </entry>
334             </row>
335             <row>
336               <entry>
337                 <para>&#160;</para>
338               </entry>
339               <entry>
340                 <para>
341                   <emphasis role="bold">network type</emphasis>
342                 </para>
343               </entry>
344               <entry>
345                 <para>
346                   <literal>TCP/IP</literal>
347                 </para>
348               </entry>
349               <entry>
350                 <para>Network type used for Lustre file system 
351                 <literal>temp</literal></para>
352               </entry>
353             </row>
354           </tbody>
355         </tgroup>
356       </informaltable>
357       <informaltable frame="all">
358         <tgroup cols="4">
359           <colspec colname="c1" colwidth="25*" />
360           <colspec colname="c2" colwidth="25*" />
361           <colspec colname="c3" colwidth="25*" />
362           <colspec colname="c4" colwidth="25*" />
363           <thead>
364             <row>
365               <entry nameend="c2" namest="c1">
366                 <para>
367                   <emphasis role="bold">Node Parameters</emphasis>
368                 </para>
369               </entry>
370               <entry>
371                 <para>
372                   <emphasis role="bold">Value</emphasis>
373                 </para>
374               </entry>
375               <entry>
376                 <para>
377                   <emphasis role="bold">Description</emphasis>
378                 </para>
379               </entry>
380             </row>
381           </thead>
382           <tbody>
383             <row>
384               <entry nameend="c4" namest="c1">
385                 <para>MGS/MDS node</para>
386               </entry>
387             </row>
388             <row>
389               <entry>
390                 <para>&#160;</para>
391               </entry>
392               <entry>
393                 <para>
394                   <emphasis role="bold">MGS/MDS node</emphasis>
395                 </para>
396               </entry>
397               <entry>
398                 <para>
399                   <literal>mdt0</literal>
400                 </para>
401               </entry>
402               <entry>
403                 <para>MDS in Lustre file system 
404                 <literal>temp</literal></para>
405               </entry>
406             </row>
407             <row>
408               <entry>
409                 <para>&#160;</para>
410               </entry>
411               <entry>
412                 <para>
413                   <emphasis role="bold">block device</emphasis>
414                 </para>
415               </entry>
416               <entry>
417                 <para>
418                   <literal>/dev/sdb</literal>
419                 </para>
420               </entry>
421               <entry>
422                 <para>Block device for the combined MGS/MDS node</para>
423               </entry>
424             </row>
425             <row>
426               <entry>
427                 <para>&#160;</para>
428               </entry>
429               <entry>
430                 <para>
431                   <emphasis role="bold">mount point</emphasis>
432                 </para>
433               </entry>
434               <entry>
435                 <para>
436                   <literal>/mnt/mdt</literal>
437                 </para>
438               </entry>
439               <entry>
440                 <para>Mount point for the 
441                 <literal>mdt0</literal> block device (
442                 <literal>/dev/sdb</literal>) on the MGS/MDS node</para>
443               </entry>
444             </row>
445             <row>
446               <entry nameend="c4" namest="c1">
447                 <para>First OSS node</para>
448               </entry>
449             </row>
450             <row>
451               <entry>
452                 <para>&#160;</para>
453               </entry>
454               <entry>
455                 <para>
456                   <emphasis role="bold">OSS node</emphasis>
457                 </para>
458               </entry>
459               <entry>
460                 <para>
461                   <literal>oss0</literal>
462                 </para>
463               </entry>
464               <entry>
465                 <para>First OSS node in Lustre file system 
466                 <literal>temp</literal></para>
467               </entry>
468             </row>
469             <row>
470               <entry>
471                 <para>&#160;</para>
472               </entry>
473               <entry>
474                 <para>
475                   <emphasis role="bold">OST</emphasis>
476                 </para>
477               </entry>
478               <entry>
479                 <para>
480                   <literal>ost0</literal>
481                 </para>
482               </entry>
483               <entry>
484                 <para>First OST in Lustre file system 
485                 <literal>temp</literal></para>
486               </entry>
487             </row>
488             <row>
489               <entry>
490                 <para>&#160;</para>
491               </entry>
492               <entry>
493                 <para>
494                   <emphasis role="bold">block device</emphasis>
495                 </para>
496               </entry>
497               <entry>
498                 <para>
499                   <literal>/dev/sdc</literal>
500                 </para>
501               </entry>
502               <entry>
503                 <para>Block device for the first OSS node (
504                 <literal>oss0</literal>)</para>
505               </entry>
506             </row>
507             <row>
508               <entry>
509                 <para>&#160;</para>
510               </entry>
511               <entry>
512                 <para>
513                   <emphasis role="bold">mount point</emphasis>
514                 </para>
515               </entry>
516               <entry>
517                 <para>
518                   <literal>/mnt/ost0</literal>
519                 </para>
520               </entry>
521               <entry>
522                 <para>Mount point for the 
523                 <literal>ost0</literal> block device (
524                 <literal>/dev/sdc</literal>) on the 
                <literal>oss0</literal> node</para>
526               </entry>
527             </row>
528             <row>
529               <entry nameend="c4" namest="c1">
530                 <para>Second OSS node</para>
531               </entry>
532             </row>
533             <row>
534               <entry>
535                 <para></para>
536               </entry>
537               <entry>
538                 <para>
539                   <emphasis role="bold">OSS node</emphasis>
540                 </para>
541               </entry>
542               <entry>
543                 <para>
544                   <literal>oss1</literal>
545                 </para>
546               </entry>
547               <entry>
548                 <para>Second OSS node in Lustre file system 
549                 <literal>temp</literal></para>
550               </entry>
551             </row>
552             <row>
553               <entry>
554                 <para></para>
555               </entry>
556               <entry>
557                 <para>
558                   <emphasis role="bold">OST</emphasis>
559                 </para>
560               </entry>
561               <entry>
562                 <para>
563                   <literal>ost1</literal>
564                 </para>
565               </entry>
566               <entry>
567                 <para>Second OST in Lustre file system 
568                 <literal>temp</literal></para>
569               </entry>
570             </row>
571             <row>
572               <entry />
573               <entry>
574                 <para>
575                   <emphasis role="bold">block device</emphasis>
576                 </para>
577               </entry>
578               <entry>
579                 <para>
580                   <literal>/dev/sdd</literal>
581                 </para>
582               </entry>
583               <entry>
                <para>Block device for the second OSS node (
                <literal>oss1</literal>)</para>
585               </entry>
586             </row>
587             <row>
588               <entry>
589                 <para></para>
590               </entry>
591               <entry>
592                 <para>
593                   <emphasis role="bold">mount point</emphasis>
594                 </para>
595               </entry>
596               <entry>
597                 <para>
598                   <literal>/mnt/ost1</literal>
599                 </para>
600               </entry>
601               <entry>
602                 <para>Mount point for the 
603                 <literal>ost1</literal> block device (
604                 <literal>/dev/sdd</literal>) on the 
605                 <literal>oss1</literal> node</para>
606               </entry>
607             </row>
608             <row>
609               <entry nameend="c4" namest="c1">
610                 <para>Client node</para>
611               </entry>
612             </row>
613             <row>
614               <entry>
615                 <para></para>
616               </entry>
617               <entry>
618                 <para>
619                   <emphasis role="bold">client node</emphasis>
620                 </para>
621               </entry>
622               <entry>
623                 <para>
624                   <literal>client1</literal>
625                 </para>
626               </entry>
627               <entry>
628                 <para>Client in Lustre file system 
629                 <literal>temp</literal></para>
630               </entry>
631             </row>
632             <row>
633               <entry>
634                 <para></para>
635               </entry>
636               <entry>
637                 <para>
638                   <emphasis role="bold">mount point</emphasis>
639                 </para>
640               </entry>
641               <entry>
642                 <para>
643                   <literal>/lustre</literal>
644                 </para>
645               </entry>
646               <entry>
647                 <para>Mount point for Lustre file system 
648                 <literal>temp</literal> on the 
649                 <literal>client1</literal> node</para>
650               </entry>
651             </row>
652           </tbody>
653         </tgroup>
654       </informaltable>
655       <note>
656         <para>We recommend that you use 'dotted-quad' notation for IP addresses
657         rather than host names to make it easier to read debug logs and debug
658         configurations with multiple interfaces.</para>
659       </note>
660       <para>For this example, complete the steps below:</para>
661       <orderedlist>
662         <listitem>
663           <para>Create a combined MGS/MDT file system on the block device. On
664           the MDS node, run:</para>
665           <screen>
666 [root@mds /]# mkfs.lustre --fsname=temp --mgs --mdt --index=0 /dev/sdb
667 </screen>
668           <para>This command generates this output:</para>
669           <screen>
670     Permanent disk data:
671 Target:            temp-MDT0000
672 Index:             0
673 Lustre FS: temp
674 Mount type:        ldiskfs
675 Flags:             0x75
676    (MDT MGS first_time update )
677 Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
678 Parameters: mdt.identity_upcall=/usr/sbin/l_getidentity
679  
680 checking for existing Lustre data: not found
681 device size = 16MB
682 2 6 18
683 formatting backing filesystem ldiskfs on /dev/sdb
684    target name             temp-MDTffff
685    4k blocks               0
686    options                 -i 4096 -I 512 -q -O dir_index,uninit_groups -F
687 mkfs_cmd = mkfs.ext2 -j -b 4096 -L temp-MDTffff  -i 4096 -I 512 -q -O 
688 dir_index,uninit_groups -F /dev/sdb
689 Writing CONFIGS/mountdata 
690 </screen>
691         </listitem>
692         <listitem>
693           <para>Mount the combined MGS/MDT file system on the block device. On
694           the MDS node, run:</para>
695           <screen>
696 [root@mds /]# mount -t lustre /dev/sdb /mnt/mdt
697 </screen>
698           <para>This command generates this output:</para>
699           <screen>
700 Lustre: temp-MDT0000: new disk, initializing 
701 Lustre: 3009:0:(lproc_mds.c:262:lprocfs_wr_identity_upcall()) temp-MDT0000:
702 group upcall set to /usr/sbin/l_getidentity
703 Lustre: temp-MDT0000.mdt: set parameter identity_upcall=/usr/sbin/l_getidentity
704 Lustre: Server temp-MDT0000 on device /dev/sdb has started 
705 </screen>
706         </listitem>
707         <listitem xml:id="create_and_mount_ost">
708           <para>Create and mount 
709           <literal>ost0</literal>.</para>
710           <para>In this example, the OSTs (
711           <literal>ost0</literal> and 
712           <literal>ost1</literal>) are being created on different OSS nodes (
713           <literal>oss0</literal> and 
714           <literal>oss1</literal> respectively).</para>
715           <orderedlist>
716             <listitem>
717               <para>Create 
              <literal>ost0</literal>. On the
              <literal>oss0</literal> node, run:</para>
720               <screen>
[root@oss0 /]# mkfs.lustre --fsname=temp --mgsnode=10.2.0.1@tcp0 --ost \
           --index=0 /dev/sdc
723 </screen>
724               <para>The command generates this output:</para>
725               <screen>
726     Permanent disk data:
727 Target:            temp-OST0000
728 Index:             0
729 Lustre FS: temp
730 Mount type:        ldiskfs
731 Flags:             0x72
732 (OST first_time update)
733 Persistent mount opts: errors=remount-ro,extents,mballoc
734 Parameters: mgsnode=10.2.0.1@tcp
735  
736 checking for existing Lustre data: not found
737 device size = 16MB
738 2 6 18
739 formatting backing filesystem ldiskfs on /dev/sdc
740    target name             temp-OST0000
741    4k blocks               0
742    options                 -I 256 -q -O dir_index,uninit_groups -F
743 mkfs_cmd = mkfs.ext2 -j -b 4096 -L temp-OST0000  -I 256 -q -O
744 dir_index,uninit_groups -F /dev/sdc
745 Writing CONFIGS/mountdata 
746 </screen>
747             </listitem>
748             <listitem>
              <para>Mount
              <literal>ost0</literal> on the OSS on which it was created. On
              the
              <literal>oss0</literal> node, run:</para>
              <screen>
[root@oss0 /]# mount -t lustre /dev/sdc /mnt/ost0
753 </screen>
754               <para>The command generates this output:</para>
755               <screen>
756 LDISKFS-fs: file extents enabled 
757 LDISKFS-fs: mballoc enabled
758 Lustre: temp-OST0000: new disk, initializing
Lustre: Server temp-OST0000 on device /dev/sdc has started
760 </screen>
761               <para>Shortly afterwards, this output appears:</para>
762               <screen>
763 Lustre: temp-OST0000: received MDS connection from 10.2.0.1@tcp0
764 Lustre: MDS temp-MDT0000: temp-OST0000_UUID now active, resetting orphans 
765 </screen>
766             </listitem>
767           </orderedlist>
768         </listitem>
769         <listitem>
770           <para>Create and mount 
771           <literal>ost1</literal>.</para>
772           <orderedlist>
773             <listitem>
              <para>Create
              <literal>ost1</literal>. On the
              <literal>oss1</literal> node, run:</para>
776               <screen>
777 [root@oss1 /]# mkfs.lustre --fsname=temp --mgsnode=10.2.0.1@tcp0 \
778            --ost --index=1 /dev/sdd
779 </screen>
780               <para>The command generates this output:</para>
781               <screen>
782     Permanent disk data:
783 Target:            temp-OST0001
784 Index:             1
785 Lustre FS: temp
786 Mount type:        ldiskfs
787 Flags:             0x72
788 (OST first_time update)
789 Persistent mount opts: errors=remount-ro,extents,mballoc
790 Parameters: mgsnode=10.2.0.1@tcp
791  
792 checking for existing Lustre data: not found
793 device size = 16MB
794 2 6 18
795 formatting backing filesystem ldiskfs on /dev/sdd
796    target name             temp-OST0001
797    4k blocks               0
798    options                 -I 256 -q -O dir_index,uninit_groups -F
799 mkfs_cmd = mkfs.ext2 -j -b 4096 -L temp-OST0001  -I 256 -q -O
dir_index,uninit_groups -F /dev/sdd
801 Writing CONFIGS/mountdata 
802 </screen>
803             </listitem>
804             <listitem>
              <para>Mount
              <literal>ost1</literal> on the OSS on which it was created. On
              the
              <literal>oss1</literal> node, run:</para>
              <screen>
[root@oss1 /]# mount -t lustre /dev/sdd /mnt/ost1
809 </screen>
810               <para>The command generates this output:</para>
811               <screen>
812 LDISKFS-fs: file extents enabled 
813 LDISKFS-fs: mballoc enabled
814 Lustre: temp-OST0001: new disk, initializing
Lustre: Server temp-OST0001 on device /dev/sdd has started
816 </screen>
817               <para>Shortly afterwards, this output appears:</para>
818               <screen>
819 Lustre: temp-OST0001: received MDS connection from 10.2.0.1@tcp0
820 Lustre: MDS temp-MDT0000: temp-OST0001_UUID now active, resetting orphans 
821 </screen>
822             </listitem>
823           </orderedlist>
824         </listitem>
825         <listitem>
826           <para>Mount the Lustre file system on the client. On the client node,
827           run:</para>
828           <screen>
[root@client1 /]# mount -t lustre 10.2.0.1@tcp0:/temp /lustre
830 </screen>
831           <para>This command generates this output:</para>
832           <screen>
833 Lustre: Client temp-client has started
834 </screen>
835         </listitem>
836         <listitem>
          <para>Verify that the file system started and is working by running
          the
          <literal>lfs df</literal>,
          <literal>dd</literal>, and
          <literal>ls</literal> commands on the client node.</para>
842           <orderedlist>
843             <listitem>
844               <para>Run the 
845               <literal>lfs df -h</literal> command:</para>
846               <screen>
847 [root@client1 /] lfs df -h 
848 </screen>
849               <para>The 
850               <literal>lfs df -h</literal> command lists space usage per OST and
851               the MDT in human-readable format. This command generates output
852               similar to this:</para>
853               <screen>
854 UUID               bytes      Used      Available   Use%    Mounted on
855 temp-MDT0000_UUID  8.0G      400.0M       7.6G        0%      /lustre[MDT:0]
856 temp-OST0000_UUID  800.0G    400.0M     799.6G        0%      /lustre[OST:0]
857 temp-OST0001_UUID  800.0G    400.0M     799.6G        0%      /lustre[OST:1]
858 filesystem summary:  1.6T    800.0M       1.6T        0%      /lustre
859 </screen>
860             </listitem>
861             <listitem>
862               <para>Run the 
863               <literal>lfs df -ih</literal> command.</para>
864               <screen>
865 [root@client1 /] lfs df -ih
866 </screen>
867               <para>The 
868               <literal>lfs df -ih</literal> command lists inode usage per OST
869               and the MDT. This command generates output similar to
870               this:</para>
871               <screen>
872 UUID              Inodes      IUsed       IFree   IUse%     Mounted on
873 temp-MDT0000_UUID   2.5M        32         2.5M      0%       /lustre[MDT:0]
874 temp-OST0000_UUID   5.5M        54         5.5M      0%       /lustre[OST:0]
875 temp-OST0001_UUID   5.5M        54         5.5M      0%       /lustre[OST:1]
876 filesystem summary: 2.5M        32         2.5M      0%       /lustre
877 </screen>
878             </listitem>
879             <listitem>
880               <para>Run the 
881               <literal>dd</literal> command:</para>
882               <screen>
883 [root@client1 /] cd /lustre
884 [root@client1 /lustre] dd if=/dev/zero of=/lustre/zero.dat bs=4M count=2
885 </screen>
886               <para>The 
887               <literal>dd</literal> command verifies write functionality by
888               creating a file containing all zeros (
889               <literal>0</literal>s). In this command, an 8 MB file is created.
890               This command generates output similar to this:</para>
891               <screen>
892 2+0 records in
893 2+0 records out
894 8388608 bytes (8.4 MB) copied, 0.159628 seconds, 52.6 MB/s
895 </screen>
896             </listitem>
897             <listitem>
898               <para>Run the 
899               <literal>ls</literal> command:</para>
900               <screen>
901 [root@client1 /lustre] ls -lsah
902 </screen>
903               <para>The 
904               <literal>ls -lsah</literal> command lists files and directories in
905               the current working directory. This command generates output
906               similar to this:</para>
907               <screen>
908 total 8.0M
909 4.0K drwxr-xr-x  2 root root 4.0K Oct 16 15:27 .
910 8.0K drwxr-xr-x 25 root root 4.0K Oct 16 15:27 ..
911 8.0M -rw-r--r--  1 root root 8.0M Oct 16 15:27 zero.dat 
912  
913 </screen>
914             </listitem>
915           </orderedlist>
916         </listitem>
917       </orderedlist>
918       <para>Once the Lustre file system is configured, it is ready for
919       use.</para>
920     </section>
921   </section>
922   <section xml:id="lustre_configure_additional_options">
923     <title>
924     <indexterm>
925       <primary>Lustre</primary>
926       <secondary>configuring</secondary>
927       <tertiary>additional options</tertiary>
928     </indexterm>Additional Configuration Options</title>
929     <para>This section describes how to scale the Lustre file system or make
930     configuration changes using the Lustre configuration utilities.</para>
931     <section remap="h3">
932       <title>
933       <indexterm>
934         <primary>Lustre</primary>
935         <secondary>configuring</secondary>
936         <tertiary>for scale</tertiary>
937       </indexterm>Scaling the Lustre File System</title>
      <para>A Lustre file system can be scaled by adding OSTs or clients. To
      create additional OSTs, repeat Step
      <xref linkend="create_and_mount_ost" /> and Step
      <xref linkend="mount_ost" /> above. To mount additional clients, repeat
      Step
      <xref linkend="mount_on_client" /> for each client.</para>
944     </section>
945     <section remap="h3">
946       <title>
947       <indexterm>
948         <primary>Lustre</primary>
949         <secondary>configuring</secondary>
950         <tertiary>striping</tertiary>
951       </indexterm>Changing Striping Defaults</title>
952       <para>The default settings for the file layout stripe pattern are shown
953       in 
954       <xref linkend="configuringlustre.tab.stripe" />.</para>
955       <table frame="none" xml:id="configuringlustre.tab.stripe">
956         <title>Default stripe pattern</title>
957         <tgroup cols="3">
958           <colspec colname="c1" colwidth="13*" />
959           <colspec colname="c2" colwidth="13*" />
960           <colspec colname="c3" colwidth="13*" />
961           <tbody>
962             <row>
963               <entry>
964                 <para>
965                   <emphasis role="bold">File Layout Parameter</emphasis>
966                 </para>
967               </entry>
968               <entry>
969                 <para>
970                   <emphasis role="bold">Default</emphasis>
971                 </para>
972               </entry>
973               <entry>
974                 <para>
975                   <emphasis role="bold">Description</emphasis>
976                 </para>
977               </entry>
978             </row>
979             <row>
980               <entry>
981                 <para>
982                   <literal>stripe_size</literal>
983                 </para>
984               </entry>
985               <entry>
986                 <para>1 MB</para>
987               </entry>
988               <entry>
989                 <para>Amount of data to write to one OST before moving to the
990                 next OST.</para>
991               </entry>
992             </row>
993             <row>
994               <entry>
995                 <para>
996                   <literal>stripe_count</literal>
997                 </para>
998               </entry>
999               <entry>
1000                 <para>1</para>
1001               </entry>
1002               <entry>
1003                 <para>The number of OSTs to use for a single file.</para>
1004               </entry>
1005             </row>
1006             <row>
1007               <entry>
1008                 <para>
1009                   <literal>start_ost</literal>
1010                 </para>
1011               </entry>
1012               <entry>
1013                 <para>-1</para>
1014               </entry>
1015               <entry>
                <para>The first OST where objects are created for each file.
                The default -1 allows the MDS to choose the starting index
                based on available space and load balancing.
                <emphasis>It is strongly recommended that you not change this
                parameter from its default of -1.</emphasis></para>
1021               </entry>
1022             </row>
1023           </tbody>
1024         </tgroup>
1025       </table>
      <para>Use the
      <literal>lfs setstripe</literal> command described in
      <xref linkend="managingstripingfreespace" /> to change the file layout
      configuration.</para>
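      <para>For example, the following command (the directory name is an
      illustration only) sets a 4 MB
      <literal>stripe_size</literal> and a
      <literal>stripe_count</literal> of 2 for new files created in a
      directory:</para>
      <screen>
lfs setstripe -S 4M -c 2 /mnt/lustre/dir1
</screen>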
1030     </section>
1031     <section remap="h3">
1032       <title>
1033       <indexterm>
1034         <primary>Lustre</primary>
1035         <secondary>configuring</secondary>
1036         <tertiary>utilities</tertiary>
1037       </indexterm>Using the Lustre Configuration Utilities</title>
1038       <para>If additional configuration is necessary, several configuration
1039       utilities are available:</para>
1040       <itemizedlist>
1041         <listitem>
          <para>
          <literal>mkfs.lustre</literal> - Use to format a disk for a Lustre
          service.</para>
1045         </listitem>
1046         <listitem>
          <para>
          <literal>tunefs.lustre</literal> - Use to modify configuration
          information on a Lustre target disk.</para>
1050         </listitem>
1051         <listitem>
          <para>
          <literal>lctl</literal> - Use to directly control Lustre features
          via an
          <literal>ioctl</literal> interface, allowing various configuration,
          maintenance and debugging features to be accessed.</para>
1057         </listitem>
1058         <listitem>
          <para>
          <literal>mount.lustre</literal> - Use to start a Lustre client or
          target service.</para>
1062         </listitem>
1063       </itemizedlist>
      <para>For examples using these utilities, see the topic
      <xref linkend="systemconfigurationutilities" />.</para>
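      <para>As a brief illustration (the device name is an example), the
      configuration stored on a formatted target can be printed without
      modifying it, and the Lustre devices currently running on a node can be
      listed:</para>
      <screen>
tunefs.lustre --dryrun /dev/sdb
lctl dl
</screen>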
1066       <para>The 
1067       <literal>lfs</literal> utility is useful for configuring and querying a
1068       variety of options related to files. For more information, see 
1069       <xref linkend="userutilities" />.</para>
1070       <note>
1071         <para>Some sample scripts are included in the directory where the
1072         Lustre software is installed. If you have installed the Lustre source
1073         code, the scripts are located in the 
1074         <literal>lustre/tests</literal> sub-directory. These scripts enable
1075         quick setup of some simple standard Lustre configurations.</para>
1076       </note>
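      <para>For example, assuming the Lustre source tree has been built, the
      <literal>llmount.sh</literal> script in
      <literal>lustre/tests</literal> sets up a small test file system on the
      local node, and
      <literal>llmountcleanup.sh</literal> removes it again:</para>
      <screen>
cd lustre/tests
sh llmount.sh
</screen>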
1077     </section>
1078   </section>
1079 </chapter>
1080 <!--
1081   vim:expandtab:shiftwidth=2:tabstop=8:
1082   -->