1 <?xml version='1.0' encoding='utf-8'?>
2 <chapter xmlns="http://docbook.org/ns/docbook"
3 xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en-US"
4 xml:id="understandinglustre">
5 <title xml:id="understandinglustre.title">Understanding Lustre
7 <para>This chapter describes the Lustre architecture and features of the
8 Lustre file system. It includes the following sections:</para>
12 <xref linkend="understandinglustre.whatislustre" />
17 <xref linkend="understandinglustre.components" />
22 <xref linkend="understandinglustre.storageio" />
26 <section xml:id="understandinglustre.whatislustre">
29 <primary>Lustre</primary>
30 </indexterm>What a Lustre File System Is (and What It Isn't)</title>
31 <para>The Lustre architecture is a storage architecture for clusters. The
32 central component of the Lustre architecture is the Lustre file system,
33 which is supported on the Linux operating system and provides a POSIX
<superscript>*</superscript> standard-compliant UNIX file system interface.
36 <para>The Lustre storage architecture is used for many different kinds of
37 clusters. It is best known for powering many of the largest
38 high-performance computing (HPC) clusters worldwide, with tens of thousands
39 of client systems, petabytes (PB) of storage and hundreds of gigabytes per
40 second (GB/sec) of I/O throughput. Many HPC sites use a Lustre file system
41 as a site-wide global file system, serving dozens of clusters.</para>
42 <para>The ability of a Lustre file system to scale capacity and performance
43 for any need reduces the need to deploy many separate file systems, such as
44 one for each compute cluster. Storage management is simplified by avoiding
45 the need to copy data between compute clusters. In addition to aggregating
46 storage capacity of many servers, the I/O throughput is also aggregated and
47 scales with additional servers. Moreover, throughput and/or capacity can be
48 easily increased by adding servers dynamically.</para>
49 <para>While a Lustre file system can function in many work environments, it
50 is not necessarily the best choice for all applications. It is best suited
51 for uses that exceed the capacity that a single server can provide, though
52 in some use cases, a Lustre file system can perform better with a single
server than other file systems due to its strong locking and data coherency.
55 <para>A Lustre file system is currently not particularly well suited for
56 "peer-to-peer" usage models where clients and servers are running on the
57 same node, each sharing a small amount of storage, due to the lack of data
58 replication at the Lustre software level. In such uses, if one
59 client/server fails, then the data stored on that node will not be
60 accessible until the node is restarted.</para>
64 <primary>Lustre</primary>
65 <secondary>features</secondary>
66 </indexterm>Lustre Features</title>
<para>Lustre file systems run on a variety of vendors' kernels. For more
68 details, see the Lustre Test Matrix
69 <xref xmlns:xlink="http://www.w3.org/1999/xlink"
70 linkend="dbdoclet.50438261_99193" />.</para>
71 <para>A Lustre installation can be scaled up or down with respect to the
72 number of client nodes, disk storage and bandwidth. Scalability and
73 performance are dependent on available disk and network bandwidth and the
74 processing power of the servers in the system. A Lustre file system can
75 be deployed in a wide variety of configurations that can be scaled well
beyond the size and performance observed in production systems to date.
<xref linkend="understandinglustre.tab1" /> shows the practical range of
80 scalability and performance characteristics of a Lustre file system and
81 some test results in production systems.</para>
83 <title xml:id="understandinglustre.tab1">Lustre File System Scalability
84 and Performance</title>
86 <colspec colname="c1" colwidth="1*" />
87 <colspec colname="c2" colwidth="2*" />
88 <colspec colname="c3" colwidth="3*" />
93 <emphasis role="bold">Feature</emphasis>
98 <emphasis role="bold">Current Practical Range</emphasis>
103 <emphasis role="bold">Known Production Usage</emphasis>
112 <emphasis role="bold">Client Scalability</emphasis>
116 <para>100-100000</para>
119 <para>50000+ clients, many in the 10000 to 20000 range</para>
125 <emphasis role="bold">Client Performance</emphasis>
130 <emphasis>Single client:</emphasis>
132 <para>I/O 90% of network bandwidth</para>
134 <emphasis>Aggregate:</emphasis>
136 <para>2.5 TB/sec I/O</para>
140 <emphasis>Single client:</emphasis>
142 <para>2 GB/sec I/O, 1000 metadata ops/sec</para>
144 <emphasis>Aggregate:</emphasis>
146 <para>2.5 TB/sec I/O </para>
152 <emphasis role="bold">OSS Scalability</emphasis>
157 <emphasis>Single OSS:</emphasis>
159 <para>1-32 OSTs per OSS,</para>
<para>128 TB per OST</para>
162 <emphasis>OSS count:</emphasis>
164 <para>1000 OSSs, with up to 4000 OSTs</para>
168 <emphasis>Single OSS:</emphasis>
170 <para>32x 8TB OSTs per OSS,</para>
171 <para>8x 32TB OSTs per OSS</para>
173 <emphasis>OSS count:</emphasis>
175 <para>450 OSSs with 1000 4TB OSTs</para>
176 <para>192 OSSs with 1344 8TB OSTs</para>
177 <para>768 OSSs with 768 72TB OSTs</para>
183 <emphasis role="bold">OSS Performance</emphasis>
188 <emphasis>Single OSS:</emphasis>
190 <para>5 GB/sec</para>
192 <emphasis>Aggregate:</emphasis>
194 <para>10 TB/sec</para>
198 <emphasis>Single OSS:</emphasis>
200 <para>2.0+ GB/sec</para>
202 <emphasis>Aggregate:</emphasis>
204 <para>2.5 TB/sec</para>
210 <emphasis role="bold">MDS Scalability</emphasis>
215 <emphasis>Single MDT:</emphasis>
<para>4 billion files (ldiskfs), 256 trillion files (ZFS)
220 <emphasis>MDS count:</emphasis>
222 <para>1 primary + 1 backup</para>
223 <para condition="l24">Up to 256 MDTs and up to 256 MDSs</para>
227 <emphasis>Single MDT:</emphasis>
229 <para>2 billion files</para>
231 <emphasis>MDS count:</emphasis>
233 <para>1 primary + 1 backup</para>
239 <emphasis role="bold">MDS Performance</emphasis>
243 <para>50000/s create operations,</para>
244 <para>200000/s metadata stat operations</para>
247 <para>15000/s create operations,</para>
248 <para>50000/s metadata stat operations</para>
254 <emphasis role="bold">File system Scalability</emphasis>
259 <emphasis>Single File:</emphasis>
261 <para>32 PB max file size (ldiskfs), 2^63 bytes (ZFS)</para>
263 <emphasis>Aggregate:</emphasis>
265 <para>512 PB space, 32 billion files</para>
269 <emphasis>Single File:</emphasis>
271 <para>multi-TB max file size</para>
273 <emphasis>Aggregate:</emphasis>
275 <para>55 PB space, 2 billion files</para>
281 <para>Other Lustre software features are:</para>
285 <emphasis role="bold">Performance-enhanced ext4 file
system:</emphasis> The Lustre file system uses an improved version of
the ext4 journaling file system to store data and metadata. This
version, called
289 <emphasis role="italic">
290 <literal>ldiskfs</literal>
291 </emphasis>, has been enhanced to improve performance and provide
292 additional functionality needed by the Lustre file system.</para>
<para condition="l24">With Lustre software release 2.4 and later,
296 it is also possible to use ZFS as the backing filesystem for Lustre
297 for the MDT, OST, and MGS storage. This allows Lustre to leverage the
scalability and data integrity features of ZFS for individual storage targets.
303 <emphasis role="bold">POSIX standard compliance:</emphasis>The full
304 POSIX test suite passes in an identical manner to a local ext4 file
305 system, with limited exceptions on Lustre clients. In a cluster, most
306 operations are atomic so that clients never see stale data or
307 metadata. The Lustre software supports mmap() file I/O.</para>
311 <emphasis role="bold">High-performance heterogeneous
networking:</emphasis> The Lustre software supports a variety of high
performance, low latency networks and permits Remote Direct Memory
Access (RDMA) for InfiniBand
<superscript>*</superscript> (utilizing OpenFabrics Enterprise
Distribution (OFED
<superscript>*</superscript>) and other advanced networks for fast
318 and efficient network transport. Multiple RDMA networks can be
319 bridged using Lustre routing for maximum performance. The Lustre
320 software also includes integrated network diagnostics.</para>
324 <emphasis role="bold">High-availability:</emphasis>The Lustre file
325 system supports active/active failover using shared storage
326 partitions for OSS targets (OSTs). Lustre software release 2.3 and
327 earlier releases offer active/passive failover using a shared storage
328 partition for the MDS target (MDT). The Lustre file system can work
329 with a variety of high availability (HA) managers to allow automated
330 failover and has no single point of failure (NSPF). This allows
331 application transparent recovery. Multiple mount protection (MMP)
332 provides integrated protection from errors in highly-available
333 systems that would otherwise cause file system corruption.</para>
<para condition="l24">With Lustre software release 2.4 or later
servers and clients, it is possible to configure active/active
338 failover of multiple MDTs. This allows scaling the metadata
339 performance of Lustre filesystems with the addition of MDT storage
340 devices and MDS nodes.</para>
344 <emphasis role="bold">Security:</emphasis>By default TCP connections
345 are only allowed from privileged ports. UNIX group membership is
346 verified on the MDS.</para>
350 <emphasis role="bold">Access control list (ACL), extended
attributes:</emphasis> The Lustre security model follows that of a
352 UNIX file system, enhanced with POSIX ACLs. Noteworthy additional
353 features include root squash.</para>
357 <emphasis role="bold">Interoperability:</emphasis>The Lustre file
358 system runs on a variety of CPU architectures and mixed-endian
359 clusters and is interoperable between successive major Lustre
360 software releases.</para>
364 <emphasis role="bold">Object-based architecture:</emphasis>Clients
365 are isolated from the on-disk file structure enabling upgrading of
366 the storage architecture without affecting the client.</para>
370 <emphasis role="bold">Byte-granular file and fine-grained metadata
locking:</emphasis> Many clients can read and modify the same file or
372 directory concurrently. The Lustre distributed lock manager (LDLM)
373 ensures that files are coherent between all clients and servers in
374 the file system. The MDT LDLM manages locks on inode permissions and
375 pathnames. Each OST has its own LDLM for locks on file stripes stored
thereon, which scales the locking performance as the file system grows.
381 <emphasis role="bold">Quotas:</emphasis>User and group quotas are
382 available for a Lustre file system.</para>
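<para>For example, user quotas can be inspected and set from a client.
The following is a minimal sketch, assuming a file system mounted at
<literal>/mnt/lustre</literal> and a user named
<literal>bob</literal> (both hypothetical); block limits are given in
KB:</para>
<screen>client# lfs setquota -u bob -b 307200 -B 309200 -i 10000 -I 11000 /mnt/lustre
client# lfs quota -u bob /mnt/lustre</screen>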
386 <emphasis role="bold">Capacity growth:</emphasis>The size of a Lustre
387 file system and aggregate cluster bandwidth can be increased without
388 interruption by adding a new OSS with OSTs to the cluster.</para>
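<para>As a sketch of this procedure, assuming a file system named
<literal>temp</literal>, an MGS reachable at
<literal>mgs@tcp0</literal>, and a new device
<literal>/dev/sdb</literal> on the added OSS (all names hypothetical),
a new OST is formatted and then mounted to bring it online:</para>
<screen>oss# mkfs.lustre --fsname=temp --ost --index=1 --mgsnode=mgs@tcp0 /dev/sdb
oss# mount -t lustre /dev/sdb /mnt/ost1</screen>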
392 <emphasis role="bold">Controlled striping:</emphasis>The layout of
393 files across OSTs can be configured on a per file, per directory, or
394 per file system basis. This allows file I/O to be tuned to specific
395 application requirements within a single file system. The Lustre file
system uses RAID-0 striping and balances space usage across OSTs.
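<para>For example, a directory can be given a default layout so that new
files created in it are striped across four OSTs with a 4 MB stripe size
(the directory path is hypothetical):</para>
<screen>client# lfs setstripe -c 4 -S 4M /mnt/lustre/dir</screen>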
401 <emphasis role="bold">Network data integrity protection:</emphasis>A
402 checksum of all data sent from the client to the OSS protects against
403 corruption during data transfer.</para>
407 <emphasis role="bold">MPI I/O:</emphasis>The Lustre architecture has
408 a dedicated MPI ADIO layer that optimizes parallel I/O to match the
409 underlying file system architecture.</para>
413 <emphasis role="bold">NFS and CIFS export:</emphasis>Lustre files can
414 be re-exported using NFS (via Linux knfsd) or CIFS (via Samba)
enabling them to be shared with non-Linux clients, such as
Microsoft<superscript>*</superscript> Windows<superscript>*</superscript>
and Apple<superscript>*</superscript> Mac OS
X<superscript>*</superscript>.</para>
423 <emphasis role="bold">Disaster recovery tool:</emphasis>The Lustre
424 file system provides an online distributed file system check (LFSCK)
425 that can restore consistency between storage components in case of a
426 major file system error. A Lustre file system can operate even in the
427 presence of file system inconsistencies, and LFSCK can run while the
428 filesystem is in use, so LFSCK is not required to complete before
429 returning the file system to production.</para>
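<para>A minimal sketch of starting LFSCK on an MDS and checking its
progress, assuming a file system named
<literal>temp</literal> (hypothetical):</para>
<screen>mds# lctl lfsck_start -M temp-MDT0000
mds# lctl get_param -n mdd.temp-MDT0000.lfsck_namespace</screen>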
433 <emphasis role="bold">Performance monitoring:</emphasis>The Lustre
434 file system offers a variety of mechanisms to examine performance and
439 <emphasis role="bold">Open source:</emphasis>The Lustre software is
440 licensed under the GPL 2.0 license for use with the Linux operating
446 <section xml:id="understandinglustre.components">
449 <primary>Lustre</primary>
450 <secondary>components</secondary>
451 </indexterm>Lustre Components</title>
452 <para>An installation of the Lustre software includes a management server
453 (MGS) and one or more Lustre file systems interconnected with Lustre
454 networking (LNET).</para>
455 <para>A basic configuration of Lustre file system components is shown in
456 <xref linkend="understandinglustre.fig.cluster" />.</para>
458 <title xml:id="understandinglustre.fig.cluster">Lustre file system
459 components in a basic cluster</title>
462 <imagedata scalefit="1" width="100%"
463 fileref="./figures/Basic_Cluster.png" />
466 <phrase>Lustre file system components in a basic cluster</phrase>
473 <primary>Lustre</primary>
474 <secondary>MGS</secondary>
475 </indexterm>Management Server (MGS)</title>
476 <para>The MGS stores configuration information for all the Lustre file
477 systems in a cluster and provides this information to other Lustre
478 components. Each Lustre target contacts the MGS to provide information,
479 and Lustre clients contact the MGS to retrieve information.</para>
480 <para>It is preferable that the MGS have its own storage space so that it
481 can be managed independently. However, the MGS can be co-located and
482 share storage space with an MDS as shown in
483 <xref linkend="understandinglustre.fig.cluster" />.</para>
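<para>For instance, a combined MGS/MDT can be formatted on a single
shared device as follows (the file system name and device are
hypothetical):</para>
<screen>mds# mkfs.lustre --fsname=temp --mgs --mdt --index=0 /dev/sda
mds# mount -t lustre /dev/sda /mnt/mgs_mdt</screen>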
486 <title>Lustre File System Components</title>
<para>Each Lustre file system consists of the following components:
492 <emphasis role="bold">Metadata Server (MDS)</emphasis>- The MDS makes
493 metadata stored in one or more MDTs available to Lustre clients. Each
494 MDS manages the names and directories in the Lustre file system(s)
and provides network request handling for one or more local MDTs.
500 <emphasis role="bold">Metadata Target (MDT</emphasis>) - For Lustre
501 software release 2.3 and earlier, each file system has one MDT. The
502 MDT stores metadata (such as filenames, directories, permissions and
file layout) on storage attached to an MDS. An MDT on a shared
storage target can be available to multiple
505 MDSs, although only one can access it at a time. If an active MDS
506 fails, a standby MDS can serve the MDT and make it available to
507 clients. This is referred to as MDS failover.</para>
508 <para condition="l24">Since Lustre software release 2.4, multiple
509 MDTs are supported. Each file system has at least one MDT. An MDT on
510 a shared storage target can be available via multiple MDSs, although
511 only one MDS can export the MDT to the clients at one time. Two MDS
512 machines share storage for two or more MDTs. After the failure of one
MDS, the remaining MDS begins serving the MDT(s) of the failed MDS.
515 <para condition="l28">Since Lustre software release 2.8,
516 multiple MDTs can be employed to share the inode records for files
517 contained in a single directory. A directory for which inode records
518 are distributed across multiple MDTs is known as a <emphasis>striped
directory</emphasis>. In the case of a Lustre file system, the inode
records may also be referred to as the 'metadata' portion of the file.
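<para condition="l28">For example, a directory striped across two MDTs
can be created from a client as follows (the path is
hypothetical):</para>
<screen>client# lfs mkdir -c 2 /mnt/lustre/striped_dir</screen>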
525 <emphasis role="bold">Object Storage Servers (OSS)</emphasis>: The
526 OSS provides file I/O service and network request handling for one or
527 more local OSTs. Typically, an OSS serves between two and eight OSTs,
528 up to 16 TB each. A typical configuration is an MDT on a dedicated
529 node, two or more OSTs on each OSS node, and a client on each of a
530 large number of compute nodes.</para>
534 <emphasis role="bold">Object Storage Target (OST)</emphasis>: User
535 file data is stored in one or more objects, each object on a separate
536 OST in a Lustre file system. The number of objects per file is
537 configurable by the user and can be tuned to optimize performance for
538 a given workload.</para>
542 <emphasis role="bold">Lustre clients</emphasis>: Lustre clients are
543 computational, visualization or desktop nodes that are running Lustre
client software, allowing them to mount the Lustre file system.
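<para>For example, a client mounts a file system named
<literal>temp</literal> by contacting the MGS at its network identifier
(both names hypothetical):</para>
<screen>client# mount -t lustre mgs@tcp0:/temp /mnt/lustre</screen>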
548 <para>The Lustre client software provides an interface between the Linux
549 virtual file system and the Lustre servers. The client software includes
550 a management client (MGC), a metadata client (MDC), and multiple object
storage clients (OSCs), one corresponding to each OST in the file system.
553 <para>A logical object volume (LOV) aggregates the OSCs to provide
554 transparent access across all the OSTs. Thus, a client with the Lustre
555 file system mounted sees a single, coherent, synchronized namespace.
Several clients can write to different parts of the same file
simultaneously, while other clients can read from the file.
<xref linkend="understandinglustre.tab.storagerequire" /> provides the
561 requirements for attached storage for each Lustre file system component
562 and describes desirable characteristics of the hardware used.</para>
564 <title xml:id="understandinglustre.tab.storagerequire">
566 <primary>Lustre</primary>
567 <secondary>requirements</secondary>
</indexterm>Storage and hardware requirements for Lustre file system components
571 <colspec colname="c1" colwidth="1*" />
572 <colspec colname="c2" colwidth="3*" />
573 <colspec colname="c3" colwidth="3*" />
578 <emphasis role="bold" />
583 <emphasis role="bold">Required attached storage</emphasis>
588 <emphasis role="bold">Desirable hardware
589 characteristics</emphasis>
598 <emphasis role="bold">MDSs</emphasis>
602 <para>1-2% of file system capacity</para>
<para>Adequate CPU power, plenty of memory, fast disk storage.
612 <emphasis role="bold">OSSs</emphasis>
616 <para>1-16 TB per OST, 1-8 OSTs per OSS</para>
619 <para>Good bus bandwidth. Recommended that storage be balanced
620 evenly across OSSs.</para>
626 <emphasis role="bold">Clients</emphasis>
633 <para>Low latency, high bandwidth network.</para>
639 <para>For additional hardware requirements and considerations, see
640 <xref linkend="settinguplustresystem" />.</para>
645 <primary>Lustre</primary>
646 <secondary>LNET</secondary>
647 </indexterm>Lustre Networking (LNET)</title>
648 <para>Lustre Networking (LNET) is a custom networking API that provides
649 the communication infrastructure that handles metadata and file I/O data
for the Lustre file system servers and clients. For more information
about LNET, see
652 <xref linkend="understandinglustrenetworking" />.</para>
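<para>For example, the network identifiers (NIDs) configured on a node,
and connectivity to a peer, can be checked with
<literal>lctl</literal> (the peer NID shown is illustrative):</para>
<screen>client# lctl list_nids
client# lctl ping 192.168.0.20@tcp</screen>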
657 <primary>Lustre</primary>
658 <secondary>cluster</secondary>
659 </indexterm>Lustre Cluster</title>
660 <para>At scale, a Lustre file system cluster can include hundreds of OSSs
661 and thousands of clients (see
662 <xref linkend="understandinglustre.fig.lustrescale" />). More than one
663 type of network can be used in a Lustre cluster. Shared storage between
OSSs enables failover capability. For more details about OSS failover, see
666 <xref linkend="understandingfailover" />.</para>
668 <title xml:id="understandinglustre.fig.lustrescale">
670 <primary>Lustre</primary>
671 <secondary>at scale</secondary>
672 </indexterm>Lustre cluster at scale</title>
675 <imagedata scalefit="1" width="100%"
676 fileref="./figures/Scaled_Cluster.png" />
679 <phrase>Lustre file system cluster at scale</phrase>
685 <section xml:id="understandinglustre.storageio">
688 <primary>Lustre</primary>
689 <secondary>storage</secondary>
692 <primary>Lustre</primary>
693 <secondary>I/O</secondary>
694 </indexterm>Lustre File System Storage and I/O</title>
695 <para>In Lustre software release 2.0, Lustre file identifiers (FIDs) were
696 introduced to replace UNIX inode numbers for identifying files or objects.
697 A FID is a 128-bit identifier that contains a unique 64-bit sequence
698 number, a 32-bit object ID (OID), and a 32-bit version number. The sequence
699 number is unique across all Lustre targets in a file system (OSTs and
700 MDTs). This change enabled future support for multiple MDTs (introduced in
Lustre software release 2.4) and ZFS (introduced in Lustre software release 2.4).
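<para>The FID assigned to a file, and the reverse mapping from a FID to
a path, can be displayed from a client; the path and the FID value shown
are purely illustrative:</para>
<screen>client# lfs path2fid /mnt/lustre/file1
[0x200000400:0x1:0x0]
client# lfs fid2path /mnt/lustre [0x200000400:0x1:0x0]
/mnt/lustre/file1</screen>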
<para>Also introduced in release 2.0 is a feature called
<emphasis role="italic">FID-in-dirent</emphasis> (also known as
<emphasis role="italic">dirdata</emphasis>) in which the FID is stored as
706 part of the name of the file in the parent directory. This feature
707 significantly improves performance for
708 <literal>ls</literal> command executions by reducing disk I/O. The
709 FID-in-dirent is generated at the time the file is created.</para>
711 <para>The FID-in-dirent feature is not compatible with the Lustre
712 software release 1.8 format. Therefore, when an upgrade from Lustre
713 software release 1.8 to a Lustre software release 2.x is performed, the
714 FID-in-dirent feature is not automatically enabled. For upgrades from
715 Lustre software release 1.8 to Lustre software releases 2.0 through 2.3,
FID-in-dirent can be enabled manually, but only takes effect for new files.
718 <para>For more information about upgrading from Lustre software release
719 1.8 and enabling FID-in-dirent for existing files, see
720 <xref xmlns:xlink="http://www.w3.org/1999/xlink"
721 linkend="upgradinglustre" />Chapter 16 “Upgrading a Lustre File
724 <para condition="l24">The LFSCK file system consistency checking tool
725 released with Lustre software release 2.4 provides functionality that
enables FID-in-dirent for existing files. It includes the following
functionality:
<para>Generates IGIF mode FIDs for existing files created on a Lustre
software release 1.8 file system.</para>
734 <para>Verifies the FID-in-dirent for each file and regenerates the
735 FID-in-dirent if it is invalid or missing.</para>
<para>Verifies the linkEA entry for each file and regenerates the linkEA
if it is invalid or missing. The
<emphasis role="italic">linkEA</emphasis> consists of the file name and
parent FID. It is stored as an extended attribute in the file
itself. Thus, the linkEA can be used to reconstruct the full path name of
a file.</para>
745 </itemizedlist></para>
746 <para>Information about where file data is located on the OST(s) is stored
747 as an extended attribute called layout EA in an MDT object identified by
748 the FID for the file (see
749 <xref xmlns:xlink="http://www.w3.org/1999/xlink"
750 linkend="Fig1.3_LayoutEAonMDT" />). If the file is a regular file (not a
directory or symbolic link), the MDT object points to 1-to-N OST object(s) on
752 the OST(s) that contain the file data. If the MDT file points to one
753 object, all the file data is stored in that object. If the MDT file points
754 to more than one object, the file data is
755 <emphasis role="italic">striped</emphasis>across the objects using RAID 0,
756 and each object is stored on a different OST. (For more information about
757 how striping is implemented in a Lustre file system, see
<xref linkend="dbdoclet.50438250_89922" />.)</para>
759 <figure xml:id="Fig1.3_LayoutEAonMDT">
760 <title>Layout EA on MDT pointing to file data on OSTs</title>
763 <imagedata scalefit="1" width="80%"
764 fileref="./figures/Metadata_File.png" />
767 <phrase>Layout EA on MDT pointing to file data on OSTs</phrase>
771 <para>When a client wants to read from or write to a file, it first fetches
772 the layout EA from the MDT object for the file. The client then uses this
773 information to perform I/O on the file, directly interacting with the OSS
774 nodes where the objects are stored.
This process is illustrated in
<xref xmlns:xlink="http://www.w3.org/1999/xlink"
linkend="Fig1.4_ClientReqstgData" />.
780 <figure xml:id="Fig1.4_ClientReqstgData">
781 <title>Lustre client requesting file data</title>
784 <imagedata scalefit="1" width="75%"
785 fileref="./figures/File_Write.png" />
788 <phrase>Lustre client requesting file data</phrase>
<para>The available bandwidth of a Lustre file system is determined as
follows:
<emphasis>network bandwidth</emphasis> equals the aggregated bandwidth
798 of the OSSs to the targets.</para>
<emphasis>disk bandwidth</emphasis> equals the sum of the disk
bandwidths of the storage targets (OSTs) up to the limit of the network
bandwidth.
<emphasis>aggregate bandwidth</emphasis> equals the minimum of the disk
809 bandwidth and the network bandwidth.</para>
<emphasis>available file system space</emphasis> equals the sum of the
814 available space of all the OSTs.</para>
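<para>For example, in a hypothetical configuration of 10 OSSs, each with
4 GB/sec of network bandwidth and OSTs providing 6 GB/sec of aggregate
disk bandwidth per OSS, the network bandwidth is 40 GB/sec, the disk
bandwidth is 60 GB/sec, and the aggregate bandwidth of the file system
is the minimum of the two, 40 GB/sec.</para>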
817 <section xml:id="dbdoclet.50438250_89922">
820 <primary>Lustre</primary>
821 <secondary>striping</secondary>
824 <primary>striping</primary>
825 <secondary>overview</secondary>
826 </indexterm>Lustre File System and Striping</title>
827 <para>One of the main factors leading to the high performance of Lustre
828 file systems is the ability to stripe data across multiple OSTs in a
round-robin fashion. Users can optionally configure the number of
stripes, the stripe size, and the OSTs used for each file.</para>
831 <para>Striping can be used to improve performance when the aggregate
832 bandwidth to a single file exceeds the bandwidth of a single OST. The
833 ability to stripe is also useful when a single OST does not have enough
834 free space to hold an entire file. For more information about benefits
835 and drawbacks of file striping, see
836 <xref linkend="dbdoclet.50438209_48033" />.</para>
837 <para>Striping allows segments or 'chunks' of data in a file to be stored
838 on different OSTs, as shown in
839 <xref linkend="understandinglustre.fig.filestripe" />. In the Lustre file
840 system, a RAID 0 pattern is used in which data is "striped" across a
certain number of objects. The number of objects in a single file is
called the
<literal>stripe_count</literal>.</para>
844 <para>Each object contains a chunk of data from the file. When the chunk
845 of data being written to a particular object exceeds the
846 <literal>stripe_size</literal>, the next chunk of data in the file is
847 stored on the next object.</para>
848 <para>Default values for
849 <literal>stripe_count</literal> and
<literal>stripe_size</literal> are set for the file system. The default
value for
<literal>stripe_count</literal> is 1 stripe per file and the default value
for
<literal>stripe_size</literal> is 1 MB. The user may change these values on
855 a per directory or per file basis. For more details, see
856 <xref linkend="dbdoclet.50438209_78664" />.</para>
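<para>For example, a layout can be assigned when a file is created and
then verified (the paths are hypothetical):</para>
<screen>client# lfs setstripe -c 3 -S 1M /mnt/lustre/file1
client# lfs getstripe /mnt/lustre/file1</screen>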
In
<xref linkend="understandinglustre.fig.filestripe" />, the
859 <literal>stripe_size</literal> for File C is larger than the
860 <literal>stripe_size</literal> for File A, allowing more data to be stored
861 in a single stripe for File C. The
862 <literal>stripe_count</literal> for File A is 3, resulting in data striped
863 across three objects, while the
864 <literal>stripe_count</literal> for File B and File C is 1.</para>
<para>No space is reserved on the OST for unwritten data. File A in
<xref linkend="understandinglustre.fig.filestripe" /> is sparse, missing
chunk 6.</para>
868 <title xml:id="understandinglustre.fig.filestripe">File striping on a
869 Lustre file system</title>
872 <imagedata scalefit="1" width="100%"
873 fileref="./figures/File_Striping.png" />
876 <phrase>File striping pattern across three OSTs for three different
877 data files. The file is sparse and missing chunk 6.</phrase>
881 <para>The maximum file size is not limited by the size of a single
882 target. In a Lustre file system, files can be striped across multiple
883 objects (up to 2000), and each object can be up to 16 TB in size with
ldiskfs, or up to 256 PB with ZFS. This leads to a maximum file size of
31.25 PB for ldiskfs or 8 EB with ZFS. Note that a Lustre file system can
support files up to 2^63 bytes (8 EB), limited only by the space available
on the OSTs.
889 <para>Versions of the Lustre software prior to Release 2.2 limited the
890 maximum stripe count for a single file to 160 OSTs.</para>
892 <para>Although a single file can only be striped over 2000 objects,
893 Lustre file systems can have thousands of OSTs. The I/O bandwidth to
894 access a single file is the aggregated I/O bandwidth to the objects in a
file, which can be as much as the aggregate bandwidth of up to 2000 servers. On
896 systems with more than 2000 OSTs, clients can do I/O using multiple files
897 to utilize the full file system bandwidth.</para>
898 <para>For more information about striping, see
899 <xref linkend="managingstripingfreespace" />.</para>