<?xml version='1.0' encoding='UTF-8'?>
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xl="http://www.w3.org/1999/xlink" version="5.0"
  xml:lang="en-US" xml:id="understandinglustre">
  <title xml:id="understandinglustre.title">Understanding Lustre Architecture</title>
  <para>This chapter describes the architecture and features of the Lustre file system. It
    includes the following sections:</para>
  <xref linkend="understandinglustre.whatislustre"/>
  <xref linkend="understandinglustre.components"/>
  <xref linkend="understandinglustre.storageio"/>
  <section xml:id="understandinglustre.whatislustre">
      <primary>Lustre</primary>
    </indexterm>What a Lustre File System Is (and What It Isn't)</title>
    <para>The Lustre architecture is a storage architecture for clusters. The central component of
      the Lustre architecture is the Lustre file system, which is supported on the Linux operating
      system and provides a POSIX-compliant UNIX file system interface.</para>
    <para>The Lustre storage architecture is used for many different kinds of clusters. It is best
      known for powering many of the largest high-performance computing (HPC) clusters worldwide,
      with tens of thousands of client systems, petabytes (PB) of storage and hundreds of gigabytes
      per second (GB/sec) of I/O throughput. Many HPC sites use a Lustre file system as a site-wide
      global file system, serving dozens of clusters.</para>
    <para>The ability of a Lustre file system to scale capacity and performance reduces the need
      to deploy many separate file systems, such as one for each compute cluster. Storage
      management is simplified by avoiding the need to copy data between compute clusters. In
      addition to aggregating the storage capacity of many servers, I/O throughput is also
      aggregated and scales with additional servers. Moreover, throughput and/or capacity can be
      easily increased by adding servers dynamically.</para>
    <para>While a Lustre file system can function in many work environments, it is not necessarily
      the best choice for all applications. It is best suited for uses that exceed the capacity that
      a single server can provide, though in some use cases, a Lustre file system can perform better
      with a single server than other file systems due to its strong locking and data
      coherency.</para>
    <para>A Lustre file system is currently not particularly well suited for
      "peer-to-peer" usage models where clients and servers are running on the same node,
      each sharing a small amount of storage, due to the lack of Lustre-level data replication. In
      such uses, if one client/server fails, then the data stored on that node will not be
      accessible until the node is restarted.</para>
        <primary>Lustre</primary>
        <secondary>features</secondary>
      </indexterm>Lustre Features</title>
      <para>Lustre file systems run on a variety of vendors' kernels. For more details, see the
        <link xl:href="http://wiki.whamcloud.com/display/PUB/Lustre+Support+Matrix">Lustre Support
        Matrix</link> on the Intel Lustre community wiki.</para>
      <para>A Lustre installation can be scaled up or down with respect to the number of client
        nodes, disk storage and bandwidth. Scalability and performance depend on available disk
        and network bandwidth and the processing power of the servers in the system. A Lustre
        file system can be deployed in a wide variety of configurations that can be scaled well
        beyond the size and performance observed in production systems to date.</para>
      <para><xref linkend="understandinglustre.tab1"/> shows the practical range of scalability and
        performance characteristics of a Lustre file system and some test results in production
        systems.</para>
        <title xml:id="understandinglustre.tab1">Lustre Scalability and Performance</title>
          <colspec colname="c1" colwidth="1*"/>
          <colspec colname="c2" colwidth="2*"/>
          <colspec colname="c3" colwidth="3*"/>
                <para><emphasis role="bold">Feature</emphasis></para>
                <para><emphasis role="bold">Current Practical Range</emphasis></para>
                <para><emphasis role="bold">Tested in Production</emphasis></para>
                  <emphasis role="bold">Client Scalability</emphasis></para>
                <para>100-100000</para>
                <para>50000+ clients, many in the 10000 to 20000 range</para>
                <para><emphasis role="bold">Client Performance</emphasis></para>
                  <emphasis>Single client: </emphasis></para>
                <para>I/O 90% of network bandwidth</para>
                <para><emphasis>Aggregate:</emphasis></para>
                <para>2.5 TB/sec I/O</para>
                  <emphasis>Single client: </emphasis></para>
                <para>2 GB/sec I/O, 1000 metadata ops/sec</para>
                <para><emphasis>Aggregate:</emphasis></para>
                <para>240 GB/sec I/O</para>
                  <emphasis role="bold">OSS Scalability</emphasis></para>
                  <emphasis>Single OSS:</emphasis></para>
                <para>1-32 OSTs per OSS,</para>
                <para>128 TB per OST</para>
                  <emphasis>OSS count:</emphasis></para>
                <para>500 OSSs, with up to 4000 OSTs</para>
                  <emphasis>Single OSS:</emphasis></para>
                <para>8 OSTs per OSS,</para>
                <para>16 TB per OST</para>
                  <emphasis>OSS count:</emphasis></para>
                <para>450 OSSs with 1000 4 TB OSTs</para>
                <para>192 OSSs with 1344 8 TB OSTs</para>
                  <emphasis role="bold">OSS Performance</emphasis></para>
                  <emphasis>Single OSS:</emphasis></para>
                <para>5 GB/sec</para>
                  <emphasis>Aggregate:</emphasis></para>
                <para>2.5 TB/sec</para>
                  <emphasis>Single OSS:</emphasis></para>
                <para>2.0+ GB/sec</para>
                  <emphasis>Aggregate:</emphasis></para>
                <para>240 GB/sec</para>
                  <emphasis role="bold">MDS Scalability</emphasis></para>
                  <emphasis>Single MDS:</emphasis></para>
                <para>4 billion files</para>
                  <emphasis>MDS count:</emphasis></para>
                <para>1 primary + 1 backup</para>
                <para condition="l24">Since Lustre* Release 2.4: up to 4096 MDSs and up to 4096
                  MDTs.</para>
                  <emphasis>Single MDS:</emphasis></para>
                <para>750 million files</para>
                  <emphasis>MDS count:</emphasis></para>
                <para>1 primary + 1 backup</para>
                  <emphasis role="bold">MDS Performance</emphasis></para>
                <para>35000/s create operations,</para>
                <para>100000/s metadata stat operations</para>
                <para>15000/s create operations,</para>
                <para>35000/s metadata stat operations</para>
                  <emphasis role="bold">File System Scalability</emphasis></para>
                  <emphasis>Single File:</emphasis></para>
                <para>2.5 PB max file size</para>
                  <emphasis>Aggregate:</emphasis></para>
                <para>512 PB space, 4 billion files</para>
                  <emphasis>Single File:</emphasis></para>
                <para>multi-TB max file size</para>
                  <emphasis>Aggregate:</emphasis></para>
                <para>10 PB space, 750 million files</para>
      <para>Other Lustre features are:</para>
          <para><emphasis role="bold">Performance-enhanced ext4 file system:</emphasis> The Lustre
            file system uses an improved version of the ext4 journaling file system to store data
            and metadata. This version, called <emphasis role="italic"
            ><literal>ldiskfs</literal></emphasis>, has been enhanced to improve performance and
            provide additional functionality needed by the Lustre file system.</para>
          <para><emphasis role="bold">POSIX* compliance:</emphasis> The full POSIX test suite passes
            in an identical manner to a local ext4 filesystem, with limited exceptions on Lustre
            clients. In a cluster, most operations are atomic so that clients never see stale data
            or metadata. The Lustre software supports mmap() file I/O.</para>
          <para><emphasis role="bold">High-performance heterogeneous networking:</emphasis> The
            Lustre software supports a variety of high performance, low latency networks and permits
            Remote Direct Memory Access (RDMA) for InfiniBand* (OFED) and other advanced networks
            for fast and efficient network transport. Multiple RDMA networks can be bridged using
            Lustre routing for maximum performance. The Lustre software also includes integrated
            network diagnostics.</para>
          <para><emphasis role="bold">High-availability:</emphasis> The Lustre file system supports
            active/active failover using shared storage partitions for OSS targets (OSTs). Lustre
            Release 2.3 and earlier releases offer active/passive failover using a shared storage
            partition for the MDS target (MDT).</para>
          <para condition="l24">With Lustre Release 2.4 or later servers and clients, it is possible
            to configure active/active failover of multiple MDTs. This allows application-transparent
            recovery. The Lustre file system can work with a variety of high
            availability (HA) managers to allow automated failover and has no single point of
            failure (NSPF). Multiple mount protection (MMP) provides integrated protection from
            errors in highly-available systems that would otherwise cause file system
            corruption.</para>
          <para><emphasis role="bold">Security:</emphasis> By default TCP connections are only
            allowed from privileged ports. UNIX group membership is verified on the MDS.</para>
          <para><emphasis role="bold">Access control list (ACL), extended attributes:</emphasis> The
            Lustre security model follows that of a UNIX file system, enhanced with POSIX ACLs. A
            noteworthy additional feature is root squash.</para>
          <para><emphasis role="bold">Interoperability:</emphasis> The Lustre file system runs on a
            variety of CPU architectures and mixed-endian clusters and is interoperable between
            successive major Lustre software releases.</para>
          <para><emphasis role="bold">Object-based architecture:</emphasis> Clients are isolated
            from the on-disk file structure enabling upgrading of the storage architecture without
            affecting the client.</para>
          <para><emphasis role="bold">Byte-granular file and fine-grained metadata
            locking:</emphasis> Many clients can read and modify the same file or directory
            concurrently. The Lustre distributed lock manager (LDLM) ensures that files are coherent
            between all clients and servers in the file system. The MDT LDLM manages locks on inode
            permissions and pathnames. Each OST has its own LDLM for locks on file stripes stored
            thereon, which scales the locking performance as the file system grows.</para>
          <para><emphasis role="bold">Quotas:</emphasis> User and group quotas are available for a
            Lustre file system (see the example following this list).</para>
          <para><emphasis role="bold">Capacity growth:</emphasis> The size of a Lustre file system
            and aggregate cluster bandwidth can be increased without interruption by adding a new
            OSS with OSTs to the cluster.</para>
          <para><emphasis role="bold">Controlled striping:</emphasis> The layout of files across
            OSTs can be configured on a per file, per directory, or per file system basis. This
            allows file I/O to be tuned to specific application requirements within a single file
            system. The Lustre file system uses RAID-0 striping and balances space usage across
            OSTs.</para>
          <para><emphasis role="bold">Network data integrity protection:</emphasis> A checksum of
            all data sent from the client to the OSS protects against corruption during data
            transfer.</para>
          <para><emphasis role="bold">MPI I/O:</emphasis> The Lustre architecture has a dedicated
            MPI ADIO layer that optimizes parallel I/O to match the underlying file system
            architecture.</para>
          <para><emphasis role="bold">NFS and CIFS export:</emphasis> Lustre files can be
            re-exported using NFS (via Linux knfsd) or CIFS (via Samba), enabling them to be shared
            with non-Linux clients such as Microsoft* Windows* and Apple* Mac OS X*.</para>
          <para><emphasis role="bold">Disaster recovery tool:</emphasis> The Lustre file system
            provides a distributed file system check (lfsck) that can restore consistency between
            storage components in case of a major file system error. A Lustre file system can
            operate even in the presence of file system inconsistencies, so lfsck is not required
            before returning the file system to production.</para>
          <para><emphasis role="bold">Performance monitoring:</emphasis> The Lustre file system
            offers a variety of mechanisms to examine performance and tuning.</para>
          <para><emphasis role="bold">Open source:</emphasis> The Lustre software is licensed under
            the GPL 2.0 license for use with Linux.</para>
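      <para>As an illustration of the quota feature above, quotas are administered from a client
        with the <literal>lfs</literal> utility. The following is a minimal sketch; the user name
        <literal>bob</literal>, the limits shown, and the mount point
        <literal>/mnt/lustre</literal> are illustrative assumptions, not values prescribed by this
        manual:</para>
      <screen># Set block limits (300 MB soft, 400 MB hard, in 1 KB units) and inode
# limits (10000 soft, 11000 hard) for user bob:
lfs setquota -u bob -b 307200 -B 409600 -i 10000 -I 11000 /mnt/lustre
# Display current usage and limits for user bob:
lfs quota -u bob /mnt/lustre</screen>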
  <section xml:id="understandinglustre.components">
        <primary>Lustre</primary>
        <secondary>components</secondary>
      </indexterm>Lustre Components</title>
    <para>An installation of the Lustre software includes a management server (MGS) and one or more
      Lustre file systems interconnected with Lustre networking (LNET).</para>
    <para>A basic configuration of Lustre components is shown in <xref
      linkend="understandinglustre.fig.cluster"/>.</para>
      <title xml:id="understandinglustre.fig.cluster">Lustre* components in a basic cluster</title>
          <imagedata scalefit="1" width="100%" fileref="./figures/Basic_Cluster.png"/>
          <phrase>Lustre* components in a basic cluster</phrase>
          <primary>Lustre</primary>
          <secondary>MGS</secondary>
        </indexterm>Management Server (MGS)</title>
      <para>The MGS stores configuration information for all the Lustre file systems in a cluster
        and provides this information to other Lustre components. Each Lustre target contacts the
        MGS to provide information, and Lustre clients contact the MGS to retrieve
        information.</para>
      <para>It is preferable that the MGS have its own storage space so that it can be managed
        independently. However, the MGS can be co-located and share storage space with an MDS as
        shown in <xref linkend="understandinglustre.fig.cluster"/>.</para>
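      <para>For example, an MGS can be formatted either on its own device or combined with an MDT
        using <literal>mkfs.lustre</literal>. This is a minimal sketch; the device names and the
        file system name <literal>temp</literal> are illustrative assumptions:</para>
      <screen># Standalone MGS on its own storage device:
mkfs.lustre --mgs /dev/sda
# MGS co-located with the first MDT of file system "temp":
mkfs.lustre --fsname=temp --mgs --mdt --index=0 /dev/sdb</screen>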
      <title>Lustre File System Components</title>
      <para>Each Lustre file system consists of the following components:</para>
          <para><emphasis role="bold">Metadata Server (MDS)</emphasis> - The MDS makes metadata
            stored in one or more MDTs available to Lustre clients. Each MDS manages the names and
            directories in the Lustre file system(s) and provides network request handling for one
            or more local MDTs.</para>
          <para><emphasis role="bold">Metadata Target (MDT)</emphasis> - For Lustre Release 2.3 and
            earlier, each file system has one MDT. The MDT stores metadata (such as filenames,
            directories, permissions and file layout) on storage attached to an MDS. An MDT on a
            shared storage target can be available to multiple MDSs,
            although only one can access it at a time. If an active MDS fails, a standby MDS can
            serve the MDT and make it available to clients. This is referred to as MDS
            failover.</para>
          <para condition="l24">Since Lustre Release 2.4, multiple MDTs are supported. Each file
            system has at least one MDT. An MDT on a shared storage target can be available via
            multiple MDSs, although only one MDS can export the MDT to the clients at one time. Two
            MDS machines can share storage for two or more MDTs. After the failure of one MDS, the
            remaining MDS begins serving the MDT(s) of the failed MDS.</para>
          <para><emphasis role="bold">Object Storage Servers (OSS)</emphasis>: The OSS provides
            file I/O service and network request handling for one or more local OSTs. Typically, an
            OSS serves between two and eight OSTs, up to 16 TB each. A typical configuration is an
            MDT on a dedicated node, two or more OSTs on each OSS node, and a client on each of a
            large number of compute nodes.</para>
          <para><emphasis role="bold">Object Storage Target (OST)</emphasis>: User file data is
            stored in one or more objects, each object on a separate OST in a Lustre file system.
            The number of objects per file is configurable by the user and can be tuned to optimize
            performance for a given workload.</para>
          <para><emphasis role="bold">Lustre clients</emphasis>: Lustre clients are computational,
            visualization or desktop nodes that are running Lustre client software, allowing them to
            mount the Lustre file system.</para>
      <para>The Lustre client software provides an interface between the Linux virtual file system
        and the Lustre servers. The client software includes a management client (MGC), a metadata
        client (MDC), and multiple object storage clients (OSCs), one corresponding to each OST in
        the file system.</para>
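      <para>A client mounts a Lustre file system by naming the MGS and the file system. A minimal
        sketch, assuming an MGS reachable at the illustrative NID <literal>10.2.2.2@tcp0</literal>
        and a file system named <literal>temp</literal>:</para>
      <screen># On a client node, mount the file system "temp" at /mnt/lustre:
mount -t lustre 10.2.2.2@tcp0:/temp /mnt/lustre</screen>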
      <para>A logical object volume (LOV) aggregates the OSCs to provide transparent access across
        all the OSTs. Thus, a client with the Lustre file system mounted sees a single, coherent,
        synchronized namespace. Several clients can write to different parts of the same file
        simultaneously, while, at the same time, other clients can read from the file.</para>
      <para><xref linkend="understandinglustre.tab.storagerequire"/> provides the requirements for
        attached storage for each Lustre file system component and describes desirable
        characteristics of the hardware used.</para>
        <title xml:id="understandinglustre.tab.storagerequire"><indexterm>
            <primary>Lustre</primary>
            <secondary>requirements</secondary>
          </indexterm>Storage and hardware requirements for Lustre* components</title>
          <colspec colname="c1" colwidth="1*"/>
          <colspec colname="c2" colwidth="3*"/>
          <colspec colname="c3" colwidth="3*"/>
                <para><emphasis role="bold"/></para>
                <para><emphasis role="bold">Required attached storage</emphasis></para>
                <para><emphasis role="bold">Desirable hardware characteristics</emphasis></para>
                  <emphasis role="bold">MDSs</emphasis></para>
                <para>1-2% of file system capacity</para>
                <para>Adequate CPU power, plenty of memory, fast disk storage.</para>
                  <emphasis role="bold">OSSs</emphasis></para>
                <para>1-16 TB per OST, 1-8 OSTs per OSS</para>
                <para>Good bus bandwidth. Recommended that storage be balanced evenly across
                  OSSs.</para>
                  <emphasis role="bold">Clients</emphasis></para>
                <para>Low latency, high bandwidth network.</para>
      <para>For additional hardware requirements and considerations, see <xref
        linkend="settinguplustresystem"/>.</para>
        <primary>Lustre</primary>
        <secondary>LNET</secondary>
      </indexterm>Lustre Networking (LNET)</title>
      <para>Lustre Networking (LNET) is a custom networking API that provides the communication
        infrastructure that handles metadata and file I/O data for the Lustre file system servers
        and clients. For more information about LNET, see <xref
        linkend="understandinglustrenetworking"/>.</para>
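      <para>As a brief illustration, LNET is commonly configured through kernel module options. A
        minimal sketch, assuming the illustrative interface names <literal>eth0</literal> and
        <literal>ib0</literal>:</para>
      <screen># /etc/modprobe.d/lustre.conf
# Use TCP over eth0 and InfiniBand (o2ib) over ib0 for Lustre traffic:
options lnet networks=tcp0(eth0),o2ib0(ib0)</screen>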
        <primary>Lustre</primary>
        <secondary>cluster</secondary>
      </indexterm>Lustre Cluster</title>
      <para>At scale, the Lustre cluster can include hundreds of OSSs and thousands of clients (see
        <xref linkend="understandinglustre.fig.lustrescale"/>). More than one type of network can
        be used in a Lustre cluster. Shared storage between OSSs enables failover capability. For
        more details about OSS failover, see <xref linkend="understandingfailover"/>.</para>
        <title xml:id="understandinglustre.fig.lustrescale"><indexterm>
            <primary>Lustre</primary>
            <secondary>at scale</secondary>
          </indexterm>Lustre* cluster at scale</title>
          <imagedata scalefit="1" width="100%" fileref="./figures/Scaled_Cluster.png"/>
          <phrase>Lustre* cluster at scale</phrase>
  <section xml:id="understandinglustre.storageio">
      <primary>Lustre</primary>
      <secondary>storage</secondary>
      <primary>Lustre</primary>
      <secondary>I/O</secondary>
    </indexterm> Lustre Storage and I/O</title>
    <para>In a Lustre file system, a file stored on the MDT points to one or more objects associated
      with a data file, as shown in <xref linkend="understandinglustre.fig.mdtost"/>. Each object
      contains data and is stored on an OST. If the MDT file points to one object, all the file data
      is stored in that object. If the file points to more than one object, the file data is
      'striped' across the objects (using RAID 0) and each object is stored on a different
      OST. (For more information about how striping is implemented in a Lustre file system, see
      <xref linkend="dbdoclet.50438250_89922"/>.)</para>
    <para>In <xref linkend="understandinglustre.fig.mdtost"/>, each filename points to an inode. The
      inode contains all of the file attributes, such as owner, access permissions, Lustre striping
      layout, access time, and access control. Multiple filenames may point to the same
      inode.</para>
      <title xml:id="understandinglustre.fig.mdtost">MDT file points to objects on OSTs containing
        file data</title>
          <imagedata scalefit="1" width="100%" fileref="./figures/Metadata_File.png"/>
          <phrase>MDT file points to objects on OSTs containing file data</phrase>
    <para>When a client opens a file, the <literal>fileopen</literal> operation transfers the file
      layout from the MDS to the client. The client then uses this information to perform I/O on the
      file, directly interacting with the OSS nodes where the objects are stored. This process is
      illustrated in <xref linkend="understandinglustre.fig.fileio"/>.</para>
      <title xml:id="understandinglustre.fig.fileio">File open and file I/O in Lustre*</title>
          <imagedata scalefit="1" width="100%" fileref="./figures/File_Write.png"/>
          <phrase>File open and file I/O in Lustre*</phrase>
    <para>Each file on the MDT contains the layout of the associated data file, including the OST
      number and object identifier. Clients request the file layout from the MDS and then perform
      file I/O operations by communicating directly with the OSSs that manage that file data.</para>
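    <para>For example, the layout of a file can be inspected from a client with
      <literal>lfs getstripe</literal>. The output below is representative rather than taken from
      a specific system; it shows the stripe count and size and, for each object, the OST index
      (<literal>obdidx</literal>) and object ID (<literal>objid</literal>):</para>
    <screen>client$ lfs getstripe /mnt/lustre/file1
/mnt/lustre/file1
lmm_stripe_count:   2
lmm_stripe_size:    1048576
lmm_stripe_offset:  0
        obdidx           objid           objid           group
             0               2             0x2               0
             1               2             0x2               0</screen>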
    <para>The available bandwidth of a Lustre file system is determined as follows:</para>
        <para>The <emphasis>network bandwidth</emphasis> equals the aggregated bandwidth of the OSSs
          to the targets.</para>
        <para>The <emphasis>disk bandwidth</emphasis> equals the sum of the disk bandwidths of the
          storage targets (OSTs) up to the limit of the network bandwidth.</para>
        <para>The <emphasis>aggregate bandwidth</emphasis> equals the minimum of the disk bandwidth
          and the network bandwidth.</para>
        <para>The <emphasis>available file system space</emphasis> equals the sum of the available
          space of all the OSTs.</para>
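    <para>For example, in a hypothetical configuration of 10 OSSs, each connected to the network at
      5 GB/sec (50 GB/sec of network bandwidth) and attached to OSTs that together deliver
      40 GB/sec of disk bandwidth, the aggregate bandwidth is min(50, 40) = 40 GB/sec; adding disks
      without adding network capacity would not increase it further.</para>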
    <section xml:id="dbdoclet.50438250_89922">
        <primary>Lustre</primary>
        <secondary>striping</secondary>
        <primary>striping</primary>
        <secondary>overview</secondary>
      </indexterm> Lustre File System and Striping</title>
      <para>One of the main factors leading to the high performance of Lustre file systems is the
        ability to stripe data across multiple OSTs in a round-robin fashion. Users can optionally
        configure for each file the number of stripes, stripe size, and OSTs that are used.</para>
      <para>Striping can be used to improve performance when the aggregate bandwidth to a single
        file exceeds the bandwidth of a single OST. The ability to stripe is also useful when a
        single OST does not have enough free space to hold an entire file. For more information
        about benefits and drawbacks of file striping, see <xref linkend="dbdoclet.50438209_48033"
        />.</para>
      <para>Striping allows segments or 'chunks' of data in a file to be stored on
        different OSTs, as shown in <xref linkend="understandinglustre.fig.filestripe"/>. In the
        Lustre file system, a RAID 0 pattern is used in which data is "striped" across a
        certain number of objects. The number of objects in a single file is called the
        <literal>stripe_count</literal>.</para>
      <para>Each object contains a chunk of data from the file. When the chunk of data being written
        to a particular object exceeds the <literal>stripe_size</literal>, the next chunk of data in
        the file is stored on the next object.</para>
      <para>Default values for <literal>stripe_count</literal> and <literal>stripe_size</literal>
        are set for the file system. The default value for <literal>stripe_count</literal> is 1
        stripe per file and the default value for <literal>stripe_size</literal> is 1 MB. The user
        may change these values on a per directory or per file basis. For more details, see <xref
        linkend="dbdoclet.50438209_78664"/>.</para>
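      <para>These values are changed with the <literal>lfs setstripe</literal> command. A minimal
        sketch, with illustrative paths and values:</para>
      <screen># Files created in this directory default to a 4 MB stripe size across 2 OSTs:
lfs setstripe -S 4M -c 2 /mnt/lustre/dir1
# Create an empty file striped across all available OSTs:
lfs setstripe -c -1 /mnt/lustre/dir1/file1
# Verify the resulting layout:
lfs getstripe /mnt/lustre/dir1/file1</screen>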
      <para>In <xref linkend="understandinglustre.fig.filestripe"/>, the <literal>stripe_size</literal>
        for File C is larger than the <literal>stripe_size</literal> for File A, allowing more data
        to be stored in a single stripe for File C. The <literal>stripe_count</literal> for File A
        is 3, resulting in data striped across three objects, while the
        <literal>stripe_count</literal> for File B and File C is 1.</para>
      <para>No space is reserved on the OST for unwritten data. File A in <xref
        linkend="understandinglustre.fig.filestripe"/>, for example, is a sparse file that is
        missing chunk 6.</para>
        <title xml:id="understandinglustre.fig.filestripe">File striping on a Lustre* file
          system</title>
            <imagedata scalefit="1" width="100%" fileref="./figures/File_Striping.png"/>
            <phrase>File striping pattern across three OSTs for three different data files. The file
              is sparse and missing chunk 6.</phrase>
      <para>The maximum file size is not limited by the size of a single target. In a Lustre file
        system, files can be striped across multiple objects (up to 2000), and each object can be
        up to 16 TB in size with ldiskfs. This leads to a maximum file size of 31.25 PB. (Note that
        a Lustre file system can support files up to 2^64 bytes depending on the backing storage
        used by OSTs.)</para>
      <para>Versions of the Lustre software prior to Release 2.2 limited the maximum stripe count
        for a single file to 160 OSTs.</para>
      <para>Although a single file can only be striped over 2000 objects, Lustre file systems can
        have thousands of OSTs. The I/O bandwidth to access a single file is the aggregated I/O
        bandwidth to the objects in that file, which can be as much as the combined bandwidth of up
        to 2000 servers. On systems with more than 2000 OSTs, clients can do I/O using multiple
        files to utilize the full file system bandwidth.</para>
      <para>For more information about striping, see <xref linkend="managingstripingfreespace"
        />.</para>