From 3b1ae96c8db70a5999e0869bcd5d9b08307952b3 Mon Sep 17 00:00:00 2001
From: Andreas Dilger
Date: Thu, 16 Jan 2025 01:12:30 -0700
Subject: [PATCH] LUDOC-11 misc: update limits to more recent values

Increase limits based on more recent system deployments.

Signed-off-by: Andreas Dilger
Change-Id: Ifead5eb6bd84ae67abd0918c27d7d63ea18cdfef
Reviewed-on: https://review.whamcloud.com/c/doc/manual/+/57803
Tested-by: jenkins
Reviewed-by: Peter Jones
---
 UnderstandingLustre.xml | 46 +++++++++++++++++++++++-----------------------
 1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/UnderstandingLustre.xml b/UnderstandingLustre.xml
index e8c2137..5303120 100644
--- a/UnderstandingLustre.xml
+++ b/UnderstandingLustre.xml
@@ -69,12 +69,12 @@
 . A Lustre installation can be scaled up or down with respect to the
- number of client nodes, disk storage and bandwidth. Scalability and
- performance are dependent on available disk and network bandwidth and the
- processing power of the servers in the system. A Lustre file system can
- be deployed in a wide variety of configurations that can be scaled well
- beyond the size and performance observed in production systems to
- date.
+ number of client nodes, storage capacity and bandwidth.
+ Scalability and performance are dependent on available storage and
+ network bandwidth and the processing power of the servers in the system.
+ A Lustre filesystem can be deployed in a wide variety of configurations
+ that can be scaled well beyond the size and performance observed in
+ production systems to date.
 shows some of the scalability and performance characteristics of a Lustre file system.
@@ -113,10 +113,10 @@
-            100-100000
+            10-100000
-            50000+ clients, many in the 10000 to 20000 range
+            50000+ clients, several in the 10000 to 20000 range
@@ -133,17 +133,17 @@
             Aggregate:
-            50 TB/sec I/O, 50M IOPS
+            50 TB/sec I/O, 225M IOPS
             Single client:
-            15 GB/sec I/O (HDR IB), 50000 IOPS
+            80 GB/sec I/O (8x 100Gbps IB), 100k IOPS
             Aggregate:
-            10 TB/sec I/O, 10M IOPS
+            20 TB/sec I/O, 40M IOPS
@@ -160,7 +160,7 @@
             Single OST:
-            500M objects, 1024TiB per OST
+            1000M objects, 4096TiB per OST
             OSS count:
@@ -170,16 +170,16 @@
             Single OSS:
-            4 OSTs per OSS
+            8 OSTs per OSS
             Single OST:
-            1024TiB OSTs
+            2048TiB OSTs
             OSS count:
             450 OSSs with 900 750TiB HDD OSTs + 450 25TiB NVMe OSTs
-            1024 OSSs with 1024 72TiB OSTs
+            576 OSSs with 576 69TiB NVMe OSTs
@@ -192,21 +192,21 @@
             Single OSS:
-            15 GB/sec, 1.5M IOPS
+            50 GB/sec, 1M IOPS
             Aggregate:
-            50 TB/sec, 50M IOPS
+            50 TB/sec, 225M IOPS
             Single OSS:
-            10 GB/sec, 1.5M IOPS
+            15/25 GB/sec write/read, 750k IOPS
             Aggregate:
-            20 TB/sec, 20M IOPS
+            20 TB/sec, 40M IOPS
@@ -238,8 +238,8 @@
             MDS count:
-            40 MDS with 40 4TiB MDTs in production
-            256 MDS with 256 64GiB MDTs in testing
+            56 MDS with 56 4TiB MDTs
+            90 billion files
@@ -272,7 +272,7 @@
             Aggregate:
-            512 PiB space, 1 trillion files
+            2048 PiB space, 1 trillion files
@@ -282,7 +282,7 @@
             Aggregate:
-            700 PiB space, 25 billion files
+            700 PiB space, 90 billion files
-- 
1.8.3.1
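
A back-of-the-envelope check of the deployed-system OST figures cited in the
diff (450 OSSs serving 900 x 750TiB HDD OSTs plus 450 x 25TiB NVMe OSTs) is
sketched below. This is illustrative only and not part of the patch; the
variable names are made up for the example. The sum comes to roughly 670 PiB,
in the same ballpark as the ~700 PiB real-world aggregate quoted later in the
table, though the two figures may describe different deployments or rounding.

    # Illustrative capacity check using the OST counts and sizes quoted above.
    hdd_osts, hdd_ost_tib = 900, 750    # HDD OSTs and per-OST capacity in TiB
    nvme_osts, nvme_ost_tib = 450, 25   # NVMe OSTs and per-OST capacity in TiB

    total_tib = hdd_osts * hdd_ost_tib + nvme_osts * nvme_ost_tib
    total_pib = total_tib / 1024        # 1 PiB = 1024 TiB

    print(f"{total_tib} TiB ~= {total_pib:.0f} PiB")
    # prints: 686250 TiB ~= 670 PiB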