1 <?xml version='1.0' encoding='UTF-8'?>
2 <chapter xmlns="http://docbook.org/ns/docbook" xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en-US" xml:id="settinguplustresystem">
3 <title xml:id="settinguplustresystem.title">Determining Hardware Configuration Requirements and
4 Formatting Options</title>
5 <para>This chapter describes hardware configuration requirements for a Lustre file system
10 <xref linkend="dbdoclet.50438256_49017"/>
15 <xref linkend="dbdoclet.space_requirements"/>
20 <xref linkend="dbdoclet.ldiskfs_mkfs_opts"/>
25 <xref linkend="dbdoclet.50438256_26456"/>
30 <xref linkend="dbdoclet.50438256_78272"/>
34 <section xml:id="dbdoclet.50438256_49017">
35 <title><indexterm><primary>setup</primary></indexterm>
36 <indexterm><primary>setup</primary><secondary>hardware</secondary></indexterm>
37 <indexterm><primary>design</primary><see>setup</see></indexterm>
38 Hardware Considerations</title>
39 <para>A Lustre file system can utilize any kind of block storage device such as single disks,
40 software RAID, hardware RAID, or a logical volume manager. In contrast to some networked file
41 systems, the block devices are only attached to the MDS and OSS nodes in a Lustre file system
42 and are not accessed by the clients directly.</para>
43 <para>Since the block devices are accessed by only one or two server nodes, a storage area network (SAN) that is accessible from all the servers is not required. Expensive switches are not needed because point-to-point connections between the servers and the storage arrays normally provide the simplest and best attachments. (If failover capability is desired, the storage must be attached to multiple servers.)</para>
44 <para>For a production environment, it is preferable that the MGS have separate storage to allow future expansion to multiple file systems. However, it is possible to run the MDS and MGS on the same machine and have them share the same storage device.</para>
45 <para>For best performance in a production environment, dedicated clients are required. For a non-production Lustre environment or for testing, a Lustre client and server can run on the same machine. However, dedicated clients are the only supported configuration.</para>
46 <warning><para>Performance and recovery issues can occur if you put a client on an MDS or OSS:</para>
49 <para>Running the OSS and a client on the same machine can cause issues with low memory and memory pressure. If the client consumes all the memory and then tries to write data to the file system, the OSS will need to allocate pages to receive data from the client but will not be able to perform this operation due to low memory. This can cause the client to hang.</para>
52 <para>Running the MDS and a client on the same machine can cause recovery and deadlock issues and impact the performance of other Lustre clients.</para>
56 <para>Only servers running on 64-bit CPUs are tested and supported. 64-bit CPU clients are
57 typically used for testing to match expected customer usage and avoid limitations due to the 4
58 GB limit for RAM size, 1 GB low-memory limitation, and 16 TB file size limit of 32-bit CPUs.
Also, due to kernel API limitations, performing backups of Lustre software release 2.x file
systems on 32-bit clients may cause backup tools to confuse files that have the same 32-bit
inode number.</para>
62 <para>The storage attached to the servers typically uses RAID to provide fault tolerance and can
63 optionally be organized with logical volume management (LVM), which is then formatted as a
64 Lustre file system. Lustre OSS and MDS servers read, write and modify data in the format
65 imposed by the file system.</para>
66 <para>The Lustre file system uses journaling file system technology on both the MDTs and OSTs.
For an MDT, as much as a 20 percent performance gain can be obtained by placing the journal on
68 a separate device.</para>
<para>The MDS can effectively utilize a lot of CPU cycles. A minimum of four processor cores is recommended, and more are advisable for file systems with many clients.</para>
<para>Lustre clients running on architectures with different endianness are supported. One limitation is that the PAGE_SIZE kernel macro on the client must be at least as large as the PAGE_SIZE of the server. In particular, ia64 or PPC clients with large pages (up to 64kB pages) can run with x86 servers (4kB pages). If you are running x86 clients with ia64 or PPC servers, you must compile the ia64 kernel with a 4kB PAGE_SIZE (so the server page size is not larger than the client page size).</para>
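<para>As a quick check, the kernel page size in use on a client or server
node can be queried with a command such as the following (shown here for a
node with 4 KiB pages; the prompt is illustrative):</para>
<screen>client# getconf PAGE_SIZE
4096</screen>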
75 <primary>setup</primary>
76 <secondary>MDT</secondary>
77 </indexterm> MGT and MDT Storage Hardware Considerations</title>
78 <para>MGT storage requirements are small (less than 100 MB even in the
79 largest Lustre file systems), and the data on an MGT is only accessed
80 on a server/client mount, so disk performance is not a consideration.
81 However, this data is vital for file system access, so
82 the MGT should be reliable storage, preferably mirrored RAID1.</para>
83 <para>MDS storage is accessed in a database-like access pattern with
84 many seeks and read-and-writes of small amounts of data.
85 Storage types that provide much lower seek times, such as SSD or NVMe
are strongly preferred for the MDT, while high-RPM SAS is acceptable.</para>
87 <para>For maximum performance, the MDT should be configured as RAID1 with
88 an internal journal and two disks from different controllers.</para>
89 <para>If you need a larger MDT, create multiple RAID1 devices from pairs
90 of disks, and then make a RAID0 array of the RAID1 devices. For ZFS,
91 use <literal>mirror</literal> VDEVs for the MDT. This ensures
92 maximum reliability because multiple disk failures only have a small
93 chance of hitting both disks in the same RAID1 device.</para>
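<para>As an illustrative sketch only, a ZFS-based MDT built from mirrored
pairs of disks could be formatted along the following lines (the file system
name, MGS NID, pool name, and device names are all hypothetical):</para>
<screen>mds# mkfs.lustre --mdt --backfstype=zfs --fsname=testfs --index=0 \
      --mgsnode=10.2.0.1@tcp lustre-mdt0/mdt0 \
      mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde</screen>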
<para>Doing the opposite (a RAID1 mirror of two RAID0 devices) means that
even two disk failures have a 50% chance of causing the loss of the whole
MDT device: the first failure disables an entire half of the mirror, and
the second failure has a 50% chance of hitting the remaining half.</para>
98 <para condition='l24'>If multiple MDTs are going to be present in the
system, each MDT should be sized for the anticipated usage and load.
100 For details on how to add additional MDTs to the filesystem, see
101 <xref linkend="dbdoclet.adding_new_mdt"/>.</para>
102 <warning condition='l24'><para>MDT0 contains the root of the Lustre file
103 system. If MDT0 is unavailable for any reason, the file system cannot be
104 used.</para></warning>
105 <note condition='l24'><para>Using the DNE feature it is possible to
106 dedicate additional MDTs to sub-directories off the file system root
directory stored on MDT0, or arbitrarily for lower-level subdirectories,
using the <literal>lfs mkdir -i <replaceable>mdt_index</replaceable></literal> command.
109 If an MDT serving a subdirectory becomes unavailable, any subdirectories
110 on that MDT and all directories beneath it will also become inaccessible.
111 Configuring multiple levels of MDTs is an experimental feature for the
112 2.4 release, and is fully functional in the 2.8 release. This is
113 typically useful for top-level directories to assign different users
114 or projects to separate MDTs, or to distribute other large working sets
115 of files to multiple MDTs.</para></note>
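<para>As a hypothetical example, a remote directory for one project could be
placed on MDT0001 as follows (the mount point and index are illustrative):</para>
<screen>client# lfs mkdir -i 1 /mnt/testfs/project1</screen>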
116 <note condition='l28'><para>Starting in the 2.8 release it is possible
117 to spread a single large directory across multiple MDTs using the DNE
118 striped directory feature by specifying multiple stripes (or shards)
119 at creation time using the
120 <literal>lfs mkdir -c <replaceable>stripe_count</replaceable></literal>
121 command, where <replaceable>stripe_count</replaceable> is often the
122 number of MDTs in the filesystem. Striped directories should typically
123 not be used for all directories in the filesystem, since this incurs
124 extra overhead compared to non-striped directories, but is useful for
125 larger directories (over 50k entries) where many output files are being
130 <title><indexterm><primary>setup</primary><secondary>OST</secondary></indexterm>OST Storage Hardware Considerations</title>
131 <para>The data access pattern for the OSS storage is a streaming I/O
132 pattern that is dependent on the access patterns of applications being
133 used. Each OSS can manage multiple object storage targets (OSTs), one
134 for each volume with I/O traffic load-balanced between servers and
135 targets. An OSS should be configured to have a balance between the
136 network bandwidth and the attached storage bandwidth to prevent
137 bottlenecks in the I/O path. Depending on the server hardware, an OSS
typically serves between 2 and 8 targets, with each target between
24 TB and 48 TB in size, though a target may be up to 256 TB in size.</para>
140 <para>Lustre file system capacity is the sum of the capacities provided
141 by the targets. For example, 64 OSSs, each with two 8 TB OSTs,
142 provide a file system with a capacity of nearly 1 PB. If each OST uses
143 ten 1 TB SATA disks (8 data disks plus 2 parity disks in a RAID-6
144 configuration), it may be possible to get 50 MB/sec from each drive,
145 providing up to 400 MB/sec of disk bandwidth per OST. If this system
is used as a storage backend with a system network, such as InfiniBand,
that provides similar bandwidth, then each OSS could provide
148 800 MB/sec of end-to-end I/O throughput. (Although the architectural
149 constraints described here are simple, in practice it takes careful
150 hardware selection, benchmarking and integration to obtain such
154 <section xml:id="dbdoclet.space_requirements">
155 <title><indexterm><primary>setup</primary><secondary>space</secondary></indexterm>
156 <indexterm><primary>space</primary><secondary>determining requirements</secondary></indexterm>
157 Determining Space Requirements</title>
158 <para>The desired performance characteristics of the backing file systems
159 on the MDT and OSTs are independent of one another. The size of the MDT
160 backing file system depends on the number of inodes needed in the total
161 Lustre file system, while the aggregate OST space depends on the total
162 amount of data stored on the file system. If MGS data is to be stored
163 on the MDT device (co-located MGT and MDT), add 100 MB to the required
164 size estimate for the MDT.</para>
165 <para>Each time a file is created on a Lustre file system, it consumes
one inode on the MDT and one object on each OST over which the file is striped.
167 Normally, each file's stripe count is based on the system-wide
168 default stripe count. However, this can be changed for individual files
using the <literal>lfs setstripe</literal> command. For more details,
170 see <xref linkend="managingstripingfreespace"/>.</para>
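<para>For example, the default layout of a directory could be changed so that
new files created in it are striped over four OSTs (the mount point and
stripe count below are illustrative):</para>
<screen>client# lfs setstripe -c 4 /mnt/testfs/widestripe</screen>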
171 <para>In a Lustre ldiskfs file system, all the MDT inodes and OST
172 objects are allocated when the file system is first formatted. When
173 the file system is in use and a file is created, metadata associated
174 with that file is stored in one of the pre-allocated inodes and does
175 not consume any of the free space used to store file data. The total
176 number of inodes on a formatted ldiskfs MDT or OST cannot be easily
changed. Thus, the number of inodes created at format time should be
generous enough to cover near-term expected usage, with some room for
growth, so that additional storage does not need to be added later just
to gain more inodes.</para>
180 <para>By default, the ldiskfs file system used by Lustre servers to store
181 user-data objects and system data reserves 5% of space that cannot be used
182 by the Lustre file system. Additionally, an ldiskfs Lustre file system
183 reserves up to 400 MB on each OST, and up to 4GB on each MDT for journal
184 use and a small amount of space outside the journal to store accounting
185 data. This reserved space is unusable for general storage. Thus, at least
186 this much space will be used per OST before any file object data is saved.
188 <para condition="l24">With a ZFS backing filesystem for the MDT or OST,
189 the space allocation for inodes and file data is dynamic, and inodes are
190 allocated as needed. A minimum of 4kB of usable space (before mirroring)
191 is needed for each inode, exclusive of other overhead such as directories,
192 internal log files, extended attributes, ACLs, etc. ZFS also reserves
193 approximately 3% of the total storage space for internal and redundant
194 metadata, which is not usable by Lustre.
195 Since the size of extended attributes and ACLs is highly dependent on
196 kernel versions and site-specific policies, it is best to over-estimate
197 the amount of space needed for the desired number of inodes, and any
198 excess space will be utilized to store more inodes.
202 <primary>setup</primary>
203 <secondary>MGT</secondary>
206 <primary>space</primary>
207 <secondary>determining MGT requirements</secondary>
208 </indexterm> Determining MGT Space Requirements</title>
209 <para>Less than 100 MB of space is typically required for the MGT.
210 The size is determined by the total number of servers in the Lustre
211 file system cluster(s) that are managed by the MGS.</para>
213 <section xml:id="dbdoclet.50438256_87676">
215 <primary>setup</primary>
216 <secondary>MDT</secondary>
219 <primary>space</primary>
220 <secondary>determining MDT requirements</secondary>
221 </indexterm> Determining MDT Space Requirements</title>
<para>When calculating the MDT size, the important factor to consider
is the number of files to be stored in the file system, since each file
needs at least 2 KiB of usable space on the MDT for its inode. Since MDTs
typically use RAID-1+0 mirroring, the total raw storage needed will be
double this.
227 <para>Please note that the actual used space per MDT depends on the number
228 of files per directory, the number of stripes per file, whether files
229 have ACLs or user xattrs, and the number of hard links per file. The
230 storage required for Lustre file system metadata is typically 1-2
231 percent of the total file system capacity depending upon file size.
232 If the <xref linkend="dataonmdt.title"/> feature is in use for Lustre
233 2.11 or later, MDT space should typically be 5 percent or more of the
234 total space, depending on the distribution of small files within the
235 filesystem and the <literal>lod.*.dom_stripesize</literal> limit on
236 the MDT and file layout used.</para>
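<para>The per-MDT Data-on-MDT size limit referenced above can be inspected
on a running MDS with a command such as the following (output will vary by
configuration):</para>
<screen>mds# lctl get_param lod.*.dom_stripesize</screen>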
237 <para>For ZFS-based MDT filesystems, the number of inodes created on
238 the MDT and OST is dynamic, so there is less need to determine the
239 number of inodes in advance, though there still needs to be some thought
240 given to the total MDT space compared to the total filesystem size.</para>
241 <para>For example, if the average file size is 5 MiB and you have
242 100 TiB of usable OST space, then you can calculate the
243 <emphasis>minimum</emphasis> total number of inodes for MDTs and OSTs
<para>(100 TiB * 1048576 MiB/TiB) / 5 MiB/inode = 20 million inodes</para>
248 <para>It is recommended that the MDT(s) have at least twice the minimum
249 number of inodes to allow for future expansion and allow for an average
250 file size smaller than expected. Thus, the minimum space for ldiskfs
251 MDT(s) should be approximately:
<para>2 KiB/inode x 20 million inodes x 2 = 80 GiB ldiskfs MDT</para>
256 <para>For details about formatting options for ldiskfs MDT and OST file
257 systems, see <xref linkend="dbdoclet.ldiskfs_mdt_mkfs"/>.</para>
259 <para>If the median file size is very small, 4 KB for example, the
260 MDT would use as much space for each file as the space used on the OST,
261 so the use of Data-on-MDT is strongly recommended in that case.
262 The MDT space per inode should be increased correspondingly to
263 account for the extra data space usage for each inode:
<para>6 KiB/inode x 20 million inodes x 2 = 240 GiB ldiskfs MDT</para>
270 <para>If the MDT has too few inodes, this can cause the space on the
271 OSTs to be inaccessible since no new files can be created. In this
272 case, the <literal>lfs df -i</literal> and <literal>df -i</literal>
273 commands will limit the number of available inodes reported for the
274 filesystem to match the total number of available objects on the OSTs.
275 Be sure to determine the appropriate MDT size needed to support the
276 filesystem before formatting. It is possible to increase the
277 number of inodes after the file system is formatted, depending on the
278 storage. For ldiskfs MDT filesystems the <literal>resize2fs</literal>
279 tool can be used if the underlying block device is on a LVM logical
280 volume and the underlying logical volume size can be increased.
281 For ZFS new (mirrored) VDEVs can be added to the MDT pool to increase
282 the total space available for inode storage.
283 Inodes will be added approximately in proportion to space added.
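<para>As an illustration (the volume group, pool, and device names below are
hypothetical), an ldiskfs MDT on an LVM logical volume could be grown with
the target stopped, or a mirrored VDEV could be added to a ZFS MDT pool:</para>
<screen>mds# lvextend -L +100G /dev/vg_mdt/mdt0
mds# resize2fs /dev/vg_mdt/mdt0

mds# zpool add mdt0pool mirror /dev/sdf /dev/sdg</screen>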
286 <note condition='l24'>
287 <para>Note that the number of total and free inodes reported by
288 <literal>lfs df -i</literal> for ZFS MDTs and OSTs is estimated based
289 on the current average space used per inode. When a ZFS filesystem is
290 first formatted, this free inode estimate will be very conservative
291 (low) due to the high ratio of directories to regular files created for
292 internal Lustre metadata storage, but this estimate will improve as
293 more files are created by regular users and the average file size will
294 better reflect actual site usage.
297 <note condition='l24'>
298 <para>Starting in release 2.4, using the DNE remote directory feature
299 it is possible to increase the total number of inodes of a Lustre
300 filesystem, as well as increasing the aggregate metadata performance,
301 by configuring additional MDTs into the filesystem, see
302 <xref linkend="dbdoclet.adding_new_mdt"/> for details.
308 <primary>setup</primary>
309 <secondary>OST</secondary>
312 <primary>space</primary>
313 <secondary>determining OST requirements</secondary>
314 </indexterm> Determining OST Space Requirements</title>
315 <para>For the OST, the amount of space taken by each object depends on
316 the usage pattern of the users/applications running on the system. The
317 Lustre software defaults to a conservative estimate for the average
318 object size (between 64 KiB per object for 10 GiB OSTs, and 1 MiB per
319 object for 16 TiB and larger OSTs). If you are confident that the average
320 file size for your applications will be different than this, you can
321 specify a different average file size (number of total inodes for a given
OST size) to reduce file system overhead and minimize file system check time.
324 See <xref linkend="dbdoclet.ldiskfs_ost_mkfs"/> for more details.</para>
327 <section xml:id="dbdoclet.ldiskfs_mkfs_opts">
330 <primary>ldiskfs</primary>
331 <secondary>formatting options</secondary>
334 <primary>setup</primary>
335 <secondary>ldiskfs</secondary>
337 Setting ldiskfs File System Formatting Options
339 <para>By default, the <literal>mkfs.lustre</literal> utility applies these
340 options to the Lustre backing file system used to store data and metadata
341 in order to enhance Lustre file system performance and scalability. These
342 options include:</para>
345 <para><literal>flex_bg</literal> - When the flag is set to enable
346 this flexible-block-groups feature, block and inode bitmaps for
347 multiple groups are aggregated to minimize seeking when bitmaps
348 are read or written and to reduce read/modify/write operations
349 on typical RAID storage (with 1 MiB RAID stripe widths). This flag
350 is enabled on both OST and MDT file systems. On MDT file systems
351 the <literal>flex_bg</literal> factor is left at the default value
352 of 16. On OSTs, the <literal>flex_bg</literal> factor is set
353 to 256 to allow all of the block or inode bitmaps in a single
354 <literal>flex_bg</literal> to be read or written in a single
355 1MiB I/O typical for RAID storage.</para>
358 <para><literal>huge_file</literal> - Setting this flag allows
359 files on OSTs to be larger than 2 TiB in size.</para>
362 <para><literal>lazy_journal_init</literal> - This extended option
363 is enabled to prevent a full overwrite to zero out the large
364 journal that is allocated by default in a Lustre file system
365 (up to 400 MiB for OSTs, up to 4GiB for MDTs), to reduce the
366 formatting time.</para>
369 <para>To override the default formatting options, use arguments to
370 <literal>mkfs.lustre</literal> to pass formatting options to the backing file system:</para>
371 <screen>--mkfsoptions='backing fs options'</screen>
372 <para>For other <literal>mkfs.lustre</literal> options, see the Linux man page for
373 <literal>mke2fs(8)</literal>.</para>
374 <section xml:id="dbdoclet.ldiskfs_mdt_mkfs">
376 <primary>inodes</primary>
377 <secondary>MDS</secondary>
378 </indexterm><indexterm>
379 <primary>setup</primary>
380 <secondary>inodes</secondary>
381 </indexterm>Setting Formatting Options for an ldiskfs MDT</title>
382 <para>The number of inodes on the MDT is determined at format time
383 based on the total size of the file system to be created. The default
384 <emphasis role="italic">bytes-per-inode</emphasis> ratio ("inode ratio")
for an ldiskfs MDT is optimized at one inode for every 2048 bytes of file
system space.</para>
387 <para>This setting takes into account the space needed for additional
388 ldiskfs filesystem-wide metadata, such as the journal (up to 4 GB),
389 bitmaps, and directories, as well as files that Lustre uses internally
390 to maintain cluster consistency. There is additional per-file metadata
391 such as file layout for files with a large number of stripes, Access
392 Control Lists (ACLs), and user extended attributes.</para>
393 <para condition="l2B"> Starting in Lustre 2.11, the <xref linkend=
394 "dataonmdt.title"/> feature allows storing small files on the MDT
395 to take advantage of high-performance flash storage, as well as reduce
396 space and network overhead. If you are planning to use the DoM feature
397 with an ldiskfs MDT, it is recommended to <emphasis>increase</emphasis>
398 the inode ratio to have enough space on the MDT for small files.</para>
399 <para>It is possible to change the recommended 2048 bytes
400 per inode for an ldiskfs MDT when it is first formatted by adding the
401 <literal>--mkfsoptions="-i bytes-per-inode"</literal> option to
402 <literal>mkfs.lustre</literal>. Decreasing the inode ratio tunable
403 <literal>bytes-per-inode</literal> will create more inodes for a given
404 MDT size, but will leave less space for extra per-file metadata and is
405 not recommended. The inode ratio must always be strictly larger than
406 the MDT inode size, which is 1024 bytes by default. It is recommended
407 to use an inode ratio at least 1024 bytes larger than the inode size to
408 ensure the MDT does not run out of space. Increasing the inode ratio
409 to at least hold the most common file size (e.g. 5120 or 66560 bytes if
410 4KB or 64KB files are widely used) is recommended for DoM.</para>
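<para>For example, an ldiskfs MDT intended to hold mostly 4 KB DoM files
might be formatted with a larger inode ratio along these lines (the file
system name, MGS NID, and device are illustrative):</para>
<screen>mds# mkfs.lustre --mdt --fsname=testfs --index=0 \
      --mgsnode=10.2.0.1@tcp --mkfsoptions="-i 5120" /dev/sdb</screen>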
411 <para>The size of the inode may be changed by adding the
412 <literal>--stripe-count-hint=N</literal> to have
413 <literal>mkfs.lustre</literal> automatically calculate a reasonable
414 inode size based on the default stripe count that will be used by the
415 filesystem, or directly by specifying the
416 <literal>--mkfsoptions="-I inode-size"</literal> option. Increasing
417 the inode size will provide more space in the inode for a larger Lustre
418 file layout, ACLs, user and system extended attributes, SELinux and
419 other security labels, and other internal metadata. However, if these
420 features or other in-inode xattrs are not needed, the larger inode size
421 will hurt metadata performance as 2x, 4x, or 8x as much data would be
422 read or written for each MDT inode access.
425 <section xml:id="dbdoclet.ldiskfs_ost_mkfs">
427 <primary>inodes</primary>
428 <secondary>OST</secondary>
429 </indexterm>Setting Formatting Options for an ldiskfs OST</title>
430 <para>When formatting an OST file system, it can be beneficial
431 to take local file system usage into account. When doing so, try to
432 reduce the number of inodes on each OST, while keeping enough margin
433 for potential variations in future usage. This helps reduce the format
434 and file system check time and makes more space available for data.</para>
435 <para>The table below shows the default
436 <emphasis role="italic">bytes-per-inode</emphasis> ratio ("inode ratio")
437 used for OSTs of various sizes when they are formatted.</para>
439 <table frame="all" xml:id="settinguplustresystem.tab1">
440 <title>Default Inode Ratios Used for Newly Formatted OSTs</title>
442 <colspec colname="c1" colwidth="3*"/>
443 <colspec colname="c2" colwidth="2*"/>
444 <colspec colname="c3" colwidth="4*"/>
448 <para><emphasis role="bold">LUN/OST size</emphasis></para>
451 <para><emphasis role="bold">Default Inode ratio</emphasis></para>
454 <para><emphasis role="bold">Total inodes</emphasis></para>
461 <para>under 10GiB </para>
464 <para>1 inode/16KiB </para>
467 <para>640 - 655k </para>
472 <para>10GiB - 1TiB </para>
475 <para>1 inode/68KiB </para>
478 <para>153k - 15.7M </para>
483 <para>1TiB - 8TiB </para>
486 <para>1 inode/256KiB </para>
489 <para>4.2M - 33.6M </para>
494 <para>over 8TiB </para>
497 <para>1 inode/1MiB </para>
500 <para>8.4M - 268M </para>
507 <para>In environments with few small files, the default inode ratio
508 may result in far too many inodes for the average file size. In this
509 case, performance can be improved by increasing the number of
510 <emphasis role="italic">bytes-per-inode</emphasis>. To set the inode
511 ratio, use the <literal>--mkfsoptions="-i <replaceable>bytes-per-inode</replaceable>"</literal>
512 argument to <literal>mkfs.lustre</literal> to specify the expected
513 average (mean) size of OST objects. For example, to create an OST
514 with an expected average object size of 8 MiB run:
515 <screen>[oss#] mkfs.lustre --ost --mkfsoptions="-i $((8192 * 1024))" ...</screen>
518 <para>OSTs formatted with ldiskfs are limited to a maximum of
519 320 million to 1 billion objects. Specifying a very small
520 bytes-per-inode ratio for a large OST that causes this limit to be
521 exceeded can cause either premature out-of-space errors and prevent
522 the full OST space from being used, or will waste space and slow down
523 e2fsck more than necessary. The default inode ratios are chosen to
524 ensure that the total number of inodes remain below this limit.
528 <para>File system check time on OSTs is affected by a number of
529 variables in addition to the number of inodes, including the size of
530 the file system, the number of allocated blocks, the distribution of
531 allocated blocks on the disk, disk speed, CPU speed, and the amount
532 of RAM on the server. Reasonable file system check times for valid
533 filesystems are 5-30 minutes per TiB, but may increase significantly
if substantial errors are detected and need to be repaired.</para>
536 <para>For more details about formatting MDT and OST file systems,
537 see <xref linkend="dbdoclet.ldiskfs_raid_opts"/>.</para>
542 <primary>setup</primary>
543 <secondary>limits</secondary>
544 </indexterm><indexterm xmlns:xi="http://www.w3.org/2001/XInclude">
545 <primary>wide striping</primary>
546 </indexterm><indexterm xmlns:xi="http://www.w3.org/2001/XInclude">
547 <primary>xattr</primary>
548 <secondary><emphasis role="italic">See</emphasis> wide striping</secondary>
549 </indexterm><indexterm>
550 <primary>large_xattr</primary>
551 <secondary>ea_inode</secondary>
552 </indexterm><indexterm>
553 <primary>wide striping</primary>
554 <secondary>large_xattr</secondary>
555 <tertiary>ea_inode</tertiary>
556 </indexterm>File and File System Limits</title>
558 <para><xref linkend="settinguplustresystem.tab2"/> describes
559 current known limits of Lustre. These limits are imposed by either
560 the Lustre architecture or the Linux virtual file system (VFS) and
561 virtual memory subsystems. In a few cases, a limit is defined within
562 the code and can be changed by re-compiling the Lustre software.
563 Instructions to install from source code are beyond the scope of this
564 document, and can be found elsewhere online. In these cases, the
565 indicated limit was used for testing of the Lustre software. </para>
567 <table frame="all" xml:id="settinguplustresystem.tab2">
568 <title>File and file system limits</title>
570 <colspec colname="c1" colwidth="3*"/>
571 <colspec colname="c2" colwidth="2*"/>
572 <colspec colname="c3" colwidth="4*"/>
576 <para><emphasis role="bold">Limit</emphasis></para>
579 <para><emphasis role="bold">Value</emphasis></para>
582 <para><emphasis role="bold">Description</emphasis></para>
589 <para>Maximum number of MDTs</para>
592 <para condition='l24'>256</para>
595 <para>The Lustre software release 2.3 and earlier allows a
596 maximum of 1 MDT per file system, but a single MDS can host
597 multiple MDTs, each one for a separate file system.</para>
598 <para condition="l24">The Lustre software release 2.4 and later
599 requires one MDT for the filesystem root. At least 255 more
600 MDTs can be added to the filesystem and attached into
601 the namespace with DNE remote or striped directories.</para>
606 <para>Maximum number of OSTs</para>
612 <para>The maximum number of OSTs is a constant that can be
613 changed at compile time. Lustre file systems with up to
614 4000 OSTs have been tested. Multiple OST file systems can
615 be configured on a single OSS node.</para>
620 <para>Maximum OST size</para>
623 <para>256TiB (ldiskfs), 256TiB (ZFS)</para>
626 <para>This is not a <emphasis>hard</emphasis> limit. Larger
627 OSTs are possible but most production systems do not
628 typically go beyond the stated limit per OST because Lustre
629 can add capacity and performance with additional OSTs, and
630 having more OSTs improves aggregate I/O performance,
631 minimizes contention, and allows parallel recovery (e2fsck
632 for ldiskfs OSTs, scrub for ZFS OSTs).
635 With 32-bit kernels, due to page cache limits, 16TB is the
636 maximum block device size, which in turn applies to the
637 size of OST. It is strongly recommended to run Lustre
638 clients and servers with 64-bit kernels.</para>
643 <para>Maximum number of clients</para>
649 <para>The maximum number of clients is a constant that can
650 be changed at compile time. Up to 30000 clients have been
651 used in production accessing a single filesystem.</para>
656 <para>Maximum size of a single file system</para>
659 <para>at least 1EiB</para>
662 <para>Each OST can have a file system up to the
663 Maximum OST size limit, and the Maximum number of OSTs
664 can be combined into a single filesystem.
670 <para>Maximum stripe count</para>
676 <para>This limit is imposed by the size of the layout that
677 needs to be stored on disk and sent in RPC requests, but is
678 not a hard limit of the protocol. The number of OSTs in the
679 filesystem can exceed the stripe count, but this limits the
680 number of OSTs across which a single file can be striped.</para>
685 <para>Maximum stripe size</para>
<para>&lt; 4 GiB</para>
691 <para>The amount of data written to each object before moving
692 on to next object.</para>
697 <para>Minimum stripe size</para>
703 <para>Due to the use of 64 KiB PAGE_SIZE on some CPU
704 architectures such as ARM and POWER, the minimum stripe
705 size is 64 KiB so that a single page is not split over
706 multiple servers.</para>
711 <para>Maximum object size</para>
714 <para>16TiB (ldiskfs), 256TiB (ZFS)</para>
717 <para>The amount of data that can be stored in a single object.
718 An object corresponds to a stripe. The ldiskfs limit of 16 TB
719 for a single object applies. For ZFS the limit is the size of
the underlying OST. A file can consist of up to 2000 stripes, and
each stripe can be up to the maximum object size.</para>
726 <para>Maximum <anchor xml:id="dbdoclet.50438256_marker-1290761" xreflabel=""/>file size</para>
729 <para>16 TiB on 32-bit systems</para>
731 <para>31.25 PiB on 64-bit ldiskfs systems,
732 8EiB on 64-bit ZFS systems</para>
735 <para>Individual files have a hard limit of nearly 16 TiB on
736 32-bit systems imposed by the kernel memory subsystem. On
737 64-bit systems this limit does not exist. Hence, files can
be 2^63 bytes (8 EiB) in size if the backing filesystem can
739 support large enough objects.</para>
740 <para>A single file can have a maximum of 2000 stripes, which
741 gives an upper single file limit of 31.25 PiB for 64-bit
742 ldiskfs systems. The actual amount of data that can be stored
743 in a file depends upon the amount of free space in each OST
744 on which the file is striped.</para>
749 <para>Maximum number of files or subdirectories in a single directory</para>
752 <para>10 million files (ldiskfs), 2^48 (ZFS)</para>
755 <para>The Lustre software uses the ldiskfs hashed directory
756 code, which has a limit of about 10 million files, depending
757 on the length of the file name. The limit on subdirectories
758 is the same as the limit on regular files.</para>
759 <note condition='l28'><para>Starting in the 2.8 release it is
760 possible to exceed this limit by striping a single directory
761 over multiple MDTs with the <literal>lfs mkdir -c</literal>
762 command, which increases the single directory limit by a
763 factor of the number of directory stripes used.</para></note>
764 <para>Lustre file systems are tested with ten million files
765 in a single directory.</para>
770 <para>Maximum number of files in the file system</para>
773 <para>4 billion (ldiskfs), 256 trillion (ZFS)</para>
774 <para condition='l24'>up to 256 times the per-MDT limit</para>
777 <para>The ldiskfs filesystem imposes an upper limit of
778 4 billion inodes per filesystem. By default, the MDT
779 filesystem is formatted with one inode per 2KB of space,
780 meaning 512 million inodes per TiB of MDT space. This can be
781 increased initially at the time of MDT filesystem creation.
782 For more information, see
783 <xref linkend="settinguplustresystem"/>.</para>
784 <para condition="l24">The ZFS filesystem dynamically allocates
785 inodes and does not have a fixed ratio of inodes per unit of MDT
786 space, but consumes approximately 4KiB of mirrored space per
787 inode, depending on the configuration.</para>
788 <para condition="l24">Each additional MDT can hold up to the
789 above maximum number of additional files, depending on
available space and the distribution of directories and files
791 in the filesystem.</para>
796 <para>Maximum length of a filename</para>
799 <para>255 bytes (filename)</para>
802 <para>This limit is 255 bytes for a single filename, the
803 same as the limit in the underlying filesystems.</para>
808 <para>Maximum length of a pathname</para>
811 <para>4096 bytes (pathname)</para>
814 <para>The Linux VFS imposes a full pathname length of 4096 bytes.</para>
819 <para>Maximum number of open files for a Lustre file system</para>
822 <para>No limit</para>
825 <para>The Lustre software does not impose a maximum for the number
826 of open files, but the practical limit depends on the amount of
827 RAM on the MDS. No "tables" for open files exist on the
828 MDS, as they are only linked in a list to a given client's
export. Each client process typically has a limit of several
thousand open files, which depends on its ulimit.</para>
837 <note><para>By default for ldiskfs MDTs the maximum stripe count for a
838 <emphasis>single file</emphasis> is limited to 160 OSTs. In order to
839 increase the maximum file stripe count, use
840 <literal>--mkfsoptions="-O ea_inode"</literal> when formatting the MDT,
841 or use <literal>tune2fs -O ea_inode</literal> to enable it after the
842 MDT has been formatted.</para>
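<para>For example, to enable this feature on an already-formatted MDT
(the device name is illustrative):</para>
<screen>mds# tune2fs -O ea_inode /dev/mdtdev</screen>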
845 <section xml:id="dbdoclet.50438256_26456">
846 <title><indexterm><primary>setup</primary><secondary>memory</secondary></indexterm>Determining Memory Requirements</title>
847 <para>This section describes the memory requirements for each Lustre file system component.</para>
850 <indexterm><primary>setup</primary><secondary>memory</secondary><tertiary>client</tertiary></indexterm>
851 Client Memory Requirements</title>
852 <para>A minimum of 2 GB RAM is recommended for clients.</para>
855 <title><indexterm><primary>setup</primary><secondary>memory</secondary><tertiary>MDS</tertiary></indexterm>MDS Memory Requirements</title>
856 <para>MDS memory requirements are determined by the following factors:</para>
859 <para>Number of clients</para>
862 <para>Size of the directories</para>
865 <para>Load placed on server</para>
868 <para>The amount of memory used by the MDS is a function of how many clients are on the system, and how many files they are using in their working set. This is driven, primarily, by the number of locks a client can hold at one time. The number of locks held by clients varies by load and memory availability on the server. Interactive clients can hold in excess of 10,000 locks at times. On the MDS, memory usage is approximately 2 KB per file, including the Lustre distributed lock manager (DLM) lock and kernel data structures for the files currently in use. Having file data in cache can improve metadata performance by a factor of 10x or more compared to reading it from disk.</para>
869 <para>MDS memory requirements include:</para>
872 <para><emphasis role="bold">File system metadata</emphasis> : A reasonable amount of RAM needs to be available for file system metadata. While no hard limit can be placed on the amount of file system metadata, if more RAM is available, then the disk I/O is needed less often to retrieve the metadata.</para>
875 <para><emphasis role="bold">Network transport</emphasis> : If you are using TCP or other network transport that uses system memory for send/receive buffers, this memory requirement must also be taken into consideration.</para>
878 <para><emphasis role="bold">Journal size</emphasis> : By default, the journal size is 400 MB for each Lustre ldiskfs file system. This can pin up to an equal amount of RAM on the MDS node per file system.</para>
881 <para><emphasis role="bold">Failover configuration</emphasis> : If the MDS node will be used for failover from another node, then the RAM for each journal should be doubled, so the backup server can handle the additional load if the primary server fails.</para>
885 <title><indexterm><primary>setup</primary><secondary>memory</secondary><tertiary>MDS</tertiary></indexterm>Calculating MDS Memory Requirements</title>
886 <para>By default, 400 MB are used for the file system journal. Additional RAM is used for caching file data for the larger working set, which is not actively in use by clients but should be kept "hot" for improved access times. Approximately 1.5 KB per file is needed to keep a file in cache without a lock.</para>
887 <para>For example, for a single MDT on an MDS with 1,000 clients, 16 interactive nodes, and a 2 million file working set (of which 400,000 files are cached on the clients):</para>
889 <para>Operating system overhead = 512 MB</para>
890 <para>File system journal = 400 MB</para>
891 <para>1000 * 4-core clients * 100 files/core * 2kB = 800 MB</para>
892 <para>16 interactive clients * 10,000 files * 2kB = 320 MB</para>
893 <para>1,600,000 file extra working set * 1.5kB/file = 2400 MB</para>
895 <para>Thus, the minimum requirement for a system with this configuration is at least 4 GB of RAM. However, additional memory may significantly improve performance.</para>
896 <para>For directories containing 1 million or more files, more memory may provide a significant benefit. For example, in an environment where clients randomly access one of 10 million files, having extra memory for the cache significantly improves performance.</para>
900 <title><indexterm><primary>setup</primary><secondary>memory</secondary><tertiary>OSS</tertiary></indexterm>OSS Memory Requirements</title>
901 <para>When planning the hardware for an OSS node, consider the memory usage of several
902 components in the Lustre file system (i.e., journal, service threads, file system metadata,
903 etc.). Also, consider the effect of the OSS read cache feature, which consumes memory as it
904 caches data on the OSS node.</para>
905 <para>In addition to the MDS memory requirements mentioned in <xref linkend="dbdoclet.50438256_87676"/>, the OSS requirements include:</para>
908 <para><emphasis role="bold">Service threads</emphasis> : The service threads on the OSS node pre-allocate a 4 MB I/O buffer for each ost_io service thread, so these buffers do not need to be allocated and freed for each I/O request.</para>
911 <para><emphasis role="bold">OSS read cache</emphasis> : OSS read cache provides read-only
912 caching of data on an OSS, using the regular Linux page cache to store the data. Just
913 like caching from a regular file system in the Linux operating system, OSS read cache
914 uses as much physical memory as is available.</para>
<para>The same calculation applies to files accessed from the OSS as for the MDS, but the load is distributed over many more OSS nodes, so the amount of memory required for locks, inode cache, etc. listed under MDS is spread out over the OSS nodes.</para>
918 <para>Because of these memory requirements, the following calculations should be taken as determining the absolute minimum RAM required in an OSS node.</para>
920 <title><indexterm><primary>setup</primary><secondary>memory</secondary><tertiary>OSS</tertiary></indexterm>Calculating OSS Memory Requirements</title>
921 <para>The minimum recommended RAM size for an OSS with two OSTs is computed below:</para>
923 <para>Ethernet/TCP send/receive buffers (4 MB * 512 threads) = 2048 MB</para>
924 <para>400 MB journal size * 2 OST devices = 800 MB</para>
925 <para>1.5 MB read/write per OST IO thread * 512 threads = 768 MB</para>
926 <para>600 MB file system read cache * 2 OSTs = 1200 MB</para>
927 <para>1000 * 4-core clients * 100 files/core * 2kB = 800MB</para>
928 <para>16 interactive clients * 10,000 files * 2kB = 320MB</para>
929 <para>1,600,000 file extra working set * 1.5kB/file = 2400MB</para>
930 <para> DLM locks + file system metadata TOTAL = 3520MB</para>
931 <para>Per OSS DLM locks + file system metadata = 3520MB/6 OSS = 600MB (approx.)</para>
932 <para>Per OSS RAM minimum requirement = 4096MB (approx.)</para>
934 <para>This consumes about 1,400 MB just for the pre-allocated buffers, and an additional 2 GB for minimal file system and kernel usage. Therefore, for a non-failover configuration, the minimum RAM would be 4 GB for an OSS node with two OSTs. Adding additional memory on the OSS will improve the performance of reading smaller, frequently-accessed files.</para>
935 <para>For a failover configuration, the minimum RAM would be at least 6 GB. For 4 OSTs on each OSS in a failover configuration 10GB of RAM is reasonable. When the OSS is not handling any failed-over OSTs the extra RAM will be used as a read cache.</para>
936 <para>As a reasonable rule of thumb, about 2 GB of base memory plus 1 GB per OST can be used. In failover configurations, about 2 GB per OST is needed.</para>
940 <section xml:id="dbdoclet.50438256_78272">
942 <primary>setup</primary>
943 <secondary>network</secondary>
944 </indexterm>Implementing Networks To Be Used by the Lustre File System</title>
945 <para>As a high performance file system, the Lustre file system places heavy loads on networks.
946 Thus, a network interface in each Lustre server and client is commonly dedicated to Lustre
947 file system traffic. This is often a dedicated TCP/IP subnet, although other network hardware
948 can also be used.</para>
949 <para>A typical Lustre file system implementation may include the following:</para>
952 <para>A high-performance backend network for the Lustre servers, typically an InfiniBand (IB) network.</para>
955 <para>A larger client network.</para>
958 <para>Lustre routers to connect the two networks.</para>
961 <para>Lustre networks and routing are configured and managed by specifying parameters to the
962 Lustre Networking (<literal>lnet</literal>) module in
963 <literal>/etc/modprobe.d/lustre.conf</literal>.</para>
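<para>For example, on a node whose Lustre traffic uses a single Ethernet
interface, a minimal <literal>lustre.conf</literal> might contain only the
following line (the interface name is illustrative):</para>
<screen>options lnet networks=tcp0(eth0)</screen>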
964 <para>To prepare to configure Lustre networking, complete the following steps:</para>
967 <para><emphasis role="bold">Identify all machines that will be running Lustre software and
968 the network interfaces they will use to run Lustre file system traffic. These machines
will form the Lustre network.</emphasis></para>
970 <para>A network is a group of nodes that communicate directly with one another. The Lustre
971 software includes Lustre network drivers (LNDs) to support a variety of network types and
972 hardware (see <xref linkend="understandinglustrenetworking"/> for a complete list). The
standard rules for specifying networks apply to Lustre networks. For example, two TCP
974 networks on two different subnets (<literal>tcp0</literal> and <literal>tcp1</literal>)
975 are considered to be two different Lustre networks.</para>
978 <para><emphasis role="bold">If routing is needed, identify the nodes to be used to route traffic between networks.</emphasis></para>
979 <para>If you are using multiple network types, then you will need a router. Any node with
980 appropriate interfaces can route Lustre networking (LNet) traffic between different
network hardware types or topologies; the node may be a server, a client, or a standalone
982 router. LNet can route messages between different network types (such as
983 TCP-to-InfiniBand) or across different topologies (such as bridging two InfiniBand or
984 TCP/IP networks). Routing will be configured in <xref linkend="configuringlnet"/>.</para>
987 <para><emphasis role="bold">Identify the network interfaces to include
988 in or exclude from LNet.</emphasis></para>
989 <para>If not explicitly specified, LNet uses either the first available
990 interface or a pre-defined default for a given network type. Interfaces
991 that LNet should not use (such as an administrative network or
IP-over-IB) can be excluded.</para>
993 <para>Network interfaces to be used or excluded will be specified using
994 the lnet kernel module parameters <literal>networks</literal> and
995 <literal>ip2nets</literal> as described in
996 <xref linkend="configuringlnet"/>.</para>
999 <para><emphasis role="bold">To ease the setup of networks with complex
1000 network configurations, determine a cluster-wide module configuration.
1002 <para>For large clusters, you can configure the networking setup for
1003 all nodes by using a single, unified set of parameters in the
1004 <literal>lustre.conf</literal> file on each node. Cluster-wide
1005 configuration is described in <xref linkend="configuringlnet"/>.</para>
1009 <para>We recommend that you use 'dotted-quad' notation for IP addresses rather than host names to make it easier to read debug logs and debug configurations with multiple interfaces.</para>