X-Git-Url: https://git.whamcloud.com/?a=blobdiff_plain;f=UnderstandingLustre.xml;h=4f2ef12ec64baab0888067984ad938c6919185dd;hb=d42b0f0c530397a9d3f1dd263d21ed3cfed2566d;hp=155fcd3097211791c1ed8b566732d3e4ca0bd4f9;hpb=402a148090756e3d51581d7ffffa31e41d237087;p=doc%2Fmanual.git

diff --git a/UnderstandingLustre.xml b/UnderstandingLustre.xml
index 155fcd3..4f2ef12 100644
--- a/UnderstandingLustre.xml
+++ b/UnderstandingLustre.xml
@@ -127,6 +127,7 @@
 4 billion files
 MDS count:
 1 primary + 1 backup
+Since Lustre 2.4: up to 4096 MDSs and up to 4096 MDTs.
 Single MDS:
@@ -180,7 +181,7 @@
 High-performance heterogeneous networking: Lustre supports a variety of high performance, low latency networks and permits Remote Direct Memory Access (RDMA) for Infiniband (OFED) and other advanced networks for fast and efficient network transport. Multiple RDMA networks can be bridged using Lustre routing for maximum performance. Lustre also provides integrated network diagnostics.
-High-availability: Lustre offers active/active failover using shared storage partitions for OSS targets (OSTs) and active/passive failover using a shared storage partition for the MDS target (MDT). This allows application transparent recovery. Lustre can work with a variety of high availability (HA) managers to allow automated failover and has no single point of failure (NSPF). Multiple mount protection (MMP) provides integrated protection from errors in highly-available systems that would otherwise cause file system corruption.
+High-availability: Lustre offers active/active failover using shared storage partitions for OSS targets (OSTs). Lustre 2.3 and earlier offers active/passive failover using a shared storage partition for the MDS target (MDT). With Lustre 2.4 or later servers and clients, it is possible to configure active/active failover of multiple MDTs. This allows application transparent recovery. Lustre can work with a variety of high availability (HA) managers to allow automated failover and has no single point of failure (NSPF). Multiple mount protection (MMP) provides integrated protection from errors in highly-available systems that would otherwise cause file system corruption.
 Security: By default TCP connections are only allowed from privileged ports. Unix group membership is verified on the MDS.
@@ -255,7 +256,8 @@
 Metadata Server (MDS) - The MDS makes metadata stored in one or more MDTs available to Lustre clients. Each MDS manages the names and directories in the Lustre file system(s) and provides network request handling for one or more local MDTs.
-Metadata Target (MDT) - The MDT stores metadata (such as filenames, directories, permissions and file layout) on storage attached to an MDS. Each file system has one MDT. An MDT on a shared storage target can be available to multiple MDSs, although only one can access it at a time. If an active MDS fails, a standby MDS can serve the MDT and make it available to clients. This is referred to as MDS failover.
+Metadata Target (MDT) - For Lustre 2.3 and earlier, each file system has one MDT. The MDT stores metadata (such as filenames, directories, permissions and file layout) on storage attached to an MDS. An MDT on a shared storage target can be available to multiple MDSs, although only one can access it at a time. If an active MDS fails, a standby MDS can serve the MDT and make it available to clients. This is referred to as MDS failover.
+Since Lustre 2.4, multiple MDTs are supported. Each file system has at least one MDT. An MDT on a shared storage target can be available to multiple MDSs, although only one MDS can export the MDT to the clients at one time. Two MDS machines can share storage for two or more MDTs; after the failure of one MDS, the remaining MDS begins serving the MDT(s) of the failed MDS.
 Object Storage Servers (OSS): The OSS provides file I/O service and network request handling for one or more local OSTs. Typically, an OSS serves between 2 and 8 OSTs, up to 16 TB each. A typical configuration is an MDT on a dedicated node, two or more OSTs on each OSS node, and a client on each of a large number of compute nodes.
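The failover behaviour described in the hunks above is normally declared when the targets are formatted, by listing every server NID that may export a given target. The following is a minimal sketch, not taken from the patch, assuming a hypothetical file system name (testfs), NIDs (10.0.0.1@tcp through 10.0.0.4@tcp) and device paths (/dev/sdb, /dev/sdc); it uses the standard mkfs.lustre --servicenode and --mgsnode options and the colon-separated failover NID syntax for the client mount:

# Format a combined MGS/MDT that either of two MDS nodes can serve (hypothetical NIDs/devices)
mkfs.lustre --fsname=testfs --mgs --mdt --index=0 \
    --servicenode=10.0.0.1@tcp --servicenode=10.0.0.2@tcp /dev/sdb

# Format an OST the same way so either OSS node can export it (active/active OSS pair)
mkfs.lustre --fsname=testfs --ost --index=0 \
    --mgsnode=10.0.0.1@tcp --mgsnode=10.0.0.2@tcp \
    --servicenode=10.0.0.3@tcp --servicenode=10.0.0.4@tcp /dev/sdc

# Clients list both possible MGS NIDs so the mount succeeds whichever MDS is currently active
mount -t lustre 10.0.0.1@tcp:10.0.0.2@tcp:/testfs /mnt/testfs

In normal operation each server mounts its own targets from the shared storage; after a failure, the surviving server mounts and serves the failed node's target(s), as the paragraph above describes.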
@@ -330,7 +332,7 @@
 <indexterm><primary>Lustre</primary><secondary>LNET</secondary></indexterm>Lustre Networking (LNET)
-Lustre Networking (LNET) is a custom networking API that provides the communication infrastructure that handles metadata and file I/O data for the Lustre file system servers and clients. For more information about LNET, see something.
+Lustre Networking (LNET) is a custom networking API that provides the communication infrastructure that handles metadata and file I/O data for the Lustre file system servers and clients. For more information about LNET, see .
 <indexterm><primary>Lustre</primary><secondary>cluster</secondary></indexterm>Lustre Cluster
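As a brief illustration of the LNET layer referred to in the last hunk: the network(s) LNET uses are typically selected with the lnet kernel module's networks option. This is a minimal sketch, assuming a node with an Ethernet interface eth0 and an InfiniBand interface ib0; the interface names and values are illustrative, not part of the patch:

# /etc/modprobe.d/lustre.conf - tell LNET which interfaces to use (illustrative values)
options lnet networks="tcp0(eth0),o2ib0(ib0)"

# After the lnet module is loaded, list the NIDs this node advertises
lctl list_nids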