.TH lfs-df 1 "2016 Dec 7" Lustre "user utilities"
.SH NAME
lfs df \- report Lustre filesystem disk usage
.SH SYNOPSIS
.BR "lfs df" " [" -i "] [" -h "] [" --lazy "] [" --pool | -p
.IR <fsname> [. <pool> ]]
.RI [ path ]
.SH DESCRIPTION
.B lfs df
displays filesystem usage information by default for each Lustre
filesystem currently mounted on that node, or for the filesystem containing
.IR path ,
if one is given. It displays the current usage and totals for each MDT and
OST separately, as well as a per-filesystem summary that matches
.BR df (1)
output for each filesystem.
.P
By default it reports the space
usage of the OSTs (the MDT space usage is shown only for reference). With
the
.B -i
option it instead shows the inode
(object) usage for each target and in the summary. For ZFS-backed
targets, the in-use inode
count accurately reflects the number of in-use objects on each target,
but the total (and therefore free) inode count is an estimate
based on the current average (mean) space used per object on each target,
and may fluctuate over time to reflect the current usage pattern of
the target. The estimate becomes more accurate as the target becomes
more full, assuming the usage pattern is consistent.
.P
.B lfs df
may also report additional target status as the last column in the
display, if there are issues with that target. States include:
.TP
.B degraded
The target has a failed drive in the RAID device, or is undergoing
RAID reconstruction. This state is marked on the server automatically
by the RAID monitoring software, if available,
or by a (user-supplied) script that monitors the target device and sets
.B lctl set_param obdfilter.\fI<target>\fB.degraded=1
on the OST. This target will be avoided for new allocations, but
will still be used for existing files located there, or if there are
not enough non-degraded OSTs to make up a widely-striped file.
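.IP
As an illustrative example (the target name is hypothetical), such a
monitoring script might set and later clear the flag on the server with:
.br
.B # lctl set_param obdfilter.testfs-OST0001.degraded=1
.br
.B # lctl set_param obdfilter.testfs-OST0001.degraded=0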
.TP
.B read-only
The target filesystem is marked read-only due to filesystem
corruption detected by ldiskfs or ZFS. No modifications are
allowed on this OST, and it needs to have
.BR e2fsck (8)
or
.BR zpool (8) " scrub"
run to repair the underlying filesystem.
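.IP
For example (device and pool names are hypothetical), an ldiskfs-backed
target would typically be checked on the server with the target unmounted:
.br
.B # e2fsck -f /dev/ost0_device
.br
while a ZFS-backed target can be scrubbed while the pool is imported:
.br
.B # zpool scrub ost0_pool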
.TP
.B out of space
The target filesystem has less than the minimum required free space and
will not be used for new object allocations until it has more free space.
.TP
.B out of inodes
The target filesystem has less than the minimum required free inodes and
will not be used for new object allocations until it has more free inodes.
.SH OPTIONS
The various options supported by
.B lfs df
are listed and explained below:
.TP
.BR -h ", " --human-readable
Print output in a human-readable format (e.g. 16.3T, 4.25P).
Suffixes are in binary (base-2) units (i.e. 1 GiB = 1024 MiB).
.TP
.B -i
Print information about the inode usage and totals for the MDTs and
OSTs rather than space usage. Note that the
.B filesystem_summary
total
.B Inodes
and
.B IFree
counts typically reflect the
.B sum
of values from the MDTs. If the total number of objects available
on the OSTs is smaller (factoring in the filesystem default
.IR stripe_count ,
as one OST object is used for each stripe in a file)
then the summary
.B Inodes
and
.B IFree
values will be reduced to reflect the
.B maximum
number of files that could potentially be created with the default
stripe count. The actual total number of files that can be created
may differ, depending on the stripe count actually used by each file.
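.IP
As a purely hypothetical illustration of this calculation: if the MDTs
report a total of 4M inodes, but the OSTs only have 6M objects available
and the filesystem default stripe count is 4, then the summary
.B Inodes
value would be limited to roughly 6M / 4 = 1.5M files.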
.TP
.B --lazy
Do not attempt to contact any OST or MDT not currently connected to
the client. This avoids blocking the
.B lfs df
output if a target is offline or unreachable, and only returns the
space on OSTs that can currently be accessed.
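.IP
For example, to quickly check the space on only the currently reachable
targets of a mounted filesystem (the mountpoint is illustrative):
.br
.B $ lfs df -h --lazy /mnt/testfs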
.TP
.BR -p ", " --pool= [ \fIfsname\fR .] \fIpool\fR
Limit the usage to report only MDTs and OSTs that are in the specified
.IR pool .
If multiple filesystems are mounted, list OSTs in
.I pool
for every filesystem, or limit the display to only a pool for a
specific filesystem if
.I fsname.pool
is given. Specifying both the fsname and pool like:
.IP
.BI "lfs df --pool=" fsname.pool
.IP
is equivalent to specifying the mountpoint for the given
.I fsname
like:
.IP
.BI "lfs df --pool=" "pool /mnt/fsname"
.TP
.BR -v ", " --verbose
Show deactivated MDTs and OSTs in the listing. By default, any
MDTs and OSTs that are deactivated by the administrator are not shown.
However, targets that are only temporarily inaccessible are still shown.
.SH EXAMPLES
.TP
.B $ lfs df -h /mnt/testfs
Lists space usage per OST and MDT for the
.B testfs
filesystem in human readable format.
.br
UUID                  bytes      Used     Avail Use% Mounted on
.br
testfs-MDT0000_UUID   13.0G      1.2G     11.0G  10% /testfs[MDT:0]
.br
testfs-OST0000_UUID    3.6T      2.9T    585.7G  84% /testfs[OST:0]
.br
testfs-OST0001_UUID    3.6T      3.1T    472.5G  87% /testfs[OST:1] D
.br
testfs-OST0002_UUID    3.6T      3.0T    570.3G  84% /testfs[OST:2] DR
.br
OST0003             : inactive device
.br
testfs-OST0006_UUID    5.4T      4.9T    417.8G  92% /testfs[OST:3]
.br
filesystem_summary:   16.2T     13.8T      2.0T  88% /testfs
.P
The above example output shows that
.B OST0003
is currently temporarily inactive or offline, while
.B OST0004
and
.B OST0005
are not shown at all, either because they are marked permanently offline
by the administrator (via
.BR "lctl set_param -P osc.testfs-OST000[45].active=0" )
or they were never added to the filesystem. The
.B OST0001
and
.B OST0002
targets are currently marked
.B degraded
(perhaps they both share the same underlying storage controller),
while
.B OST0002
is also marked
.B read-only
after detecting non-recoverable corruption in the filesystem.
.TP
.B $ lfs df -i
List inode usage per OST and MDT for all mounted Lustre filesystems.
.br
UUID                  Inodes    IUsed    IFree IUse% Mounted on
.br
testfs-MDT0000_UUID   932160   884609    47551   95% /testfs[MDT:0]
.br
testfs-OST0000_UUID   267456   179649    87807   67% /testfs[OST:0]
.br
testfs-OST0001_UUID   268864   173466    95398   64% /testfs[OST:1] D
.br
testfs-OST0002_UUID   267456   169575    97881   63% /testfs[OST:2] DR
.br
OST0003             : inactive device
.br
testfs-OST0006_UUID   426144   377448    48696   88% /testfs[OST:3]
.br
filesystem_summary:   932160   884609    47551   95% /testfs
.TP
.B $ lfs df --pool ssd /mnt/testfs
List space usage for only the
.B ssd
pool of the
.B testfs
filesystem.
.TP
.B $ lfs df -v /mnt/testfs
List all MDTs and OSTs for the
.B testfs
filesystem, even if not currently connected.