1 <?xml version='1.0' encoding='utf-8'?>
2 <chapter xmlns="http://docbook.org/ns/docbook"
3 xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en-US"
4 xml:id="managingfilesystemio">
5 <title xml:id="managingfilesystemio.title">Managing the File System and I/O</title>
7 <section xml:id="dbdoclet.50438211_17536">
10 <primary>I/O</primary>
13 <primary>I/O</primary>
14 <secondary>full OSTs</secondary>
15 </indexterm>Handling Full OSTs</title>
16 <para>Sometimes a Lustre file system becomes unbalanced, often due to
17 incorrectly specified stripe settings, or when very large files are created
18 that are not striped over all of the OSTs. Lustre will automatically avoid
19 allocating new files on OSTs that are full. If an OST is completely full and
20 more data is written to files already located on that OST, an error occurs.
21 The procedures below describe how to handle a full OST.</para>
22 <para>The MDS normally handles space balancing automatically at file
23 creation time, so this procedure is usually not needed, but manual data
24 migration may be desirable in some cases (for example, when creating very
25 large files that would consume more than the total free space of the full OSTs).</para>
29 <primary>I/O</primary>
30 <secondary>OST space usage</secondary>
31 </indexterm>Checking OST Space Usage</title>
32 <para>The example below shows an unbalanced file system:</para>
35 UUID bytes Used Available \
37 testfs-MDT0000_UUID 4.4G 214.5M 3.9G \
39 testfs-OST0000_UUID 2.0G 751.3M 1.1G \
40 37% /mnt/testfs[OST:0]
41 testfs-OST0001_UUID 2.0G 755.3M 1.1G \
42 37% /mnt/testfs[OST:1]
43 testfs-OST0002_UUID 2.0G 1.7G 155.1M \
44 86% /mnt/testfs[OST:2] ****
45 testfs-OST0003_UUID 2.0G 751.3M 1.1G \
46 37% /mnt/testfs[OST:3]
47 testfs-OST0004_UUID 2.0G 747.3M 1.1G \
48 37% /mnt/testfs[OST:4]
49 testfs-OST0005_UUID 2.0G 743.3M 1.1G \
50 36% /mnt/testfs[OST:5]
52 filesystem summary: 11.8G 5.4G 5.8G \
55 <para>In this case, OST0002 is almost full, and when an attempt is made to
56 write additional data to the file system (even with uniform
57 striping over all of the OSTs), the write command fails as follows:</para>
59 client# lfs setstripe -S 4M -i 0 -c -1 /mnt/testfs
60 client# dd if=/dev/zero of=/mnt/testfs/test_3 bs=10M count=100
61 dd: writing '/mnt/testfs/test_3': No space left on device
64 1017192448 bytes (1.0 GB) copied, 23.2411 seconds, 43.8 MB/s
70 <primary>I/O</primary>
71 <secondary>disabling OST creates</secondary>
72 </indexterm>Disabling creates on a Full OST</title>
73 <para>If OST usage is imbalanced and one or more OSTs are close to being
74 full while others still have plenty of free space, the MDS will typically
75 avoid file creation on the full OST(s) automatically to prevent the file
76 system from running out of space. The full OSTs may also be deactivated
77 manually on the MDS to ensure that it will not allocate
78 new objects there.</para>
81 <para>Log into the MDS server and use the <literal>lctl</literal>
82 command to stop new object creation on the full OST(s):
85 mds# lctl set_param osp.<replaceable>fsname</replaceable>-OST<replaceable>nnnn</replaceable>*.max_create_count=0
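<para>For example, to stop new object creation on the full OST shown in the
earlier <literal>lfs df</literal> output (a brief illustration using the
<literal>testfs</literal> file system and OST0002 from that example):</para>
<screen>mds# lctl set_param osp.testfs-OST0002*.max_create_count=0</screen>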
89 <para>When new files are created in the file system, they will only use
90 the remaining OSTs. Either manual space rebalancing can be done by
91 migrating data to other OSTs, as shown in the next section, or normal
92 file deletion and creation can passively rebalance the space usage.</para>
97 <primary>I/O</primary>
98 <secondary>migrating data</secondary>
101 <primary>maintenance</primary>
102 <secondary>full OSTs</secondary>
103 </indexterm>Migrating Data within a File System</title>
105 <para>If there is a need to move the file data from the current
106 OST(s) to new OST(s), the data must be migrated (copied)
107 to the new location. The simplest way to do this is to use the
108 <literal>lfs_migrate</literal> command, as described in
109 <xref linkend="dbdoclet.adding_new_ost" />.</para>
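<para>For example, to migrate a single file so that its objects are
re-allocated by the current layout allocator, a minimal sketch (the file
name is illustrative) is:</para>
<screen>client# lfs_migrate -y /mnt/testfs/file1</screen>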
114 <primary>I/O</primary>
115 <secondary>bringing OST online</secondary>
118 <primary>maintenance</primary>
119 <secondary>bringing OST online</secondary>
120 </indexterm>Returning an Inactive OST Back Online</title>
121 <para>Once the full OST(s) are no longer severely imbalanced, due
122 to either active or passive data redistribution, they should be
123 reactivated so that new files will again be allocated on them.</para>
125 mds# lctl set_param osp.testfs-OST0002.max_create_count=20000
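<para>Optionally, confirm that object creation has been re-enabled by reading
back the same parameter (a quick check):</para>
<screen>mds# lctl get_param osp.testfs-OST0002*.max_create_count</screen>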
131 <primary>migrating metadata</primary>
132 </indexterm>Migrating Directories to a new MDT</title>
133 <para condition='l28'>Lustre software version 2.8 includes a feature
134 to migrate metadata (directories and inodes therein) between MDTs.
135 This migration can only be performed on whole directories. For example,
136 to migrate the contents of the <literal>/testfs/testremote</literal>
137 directory from the MDT it currently resides on to MDT0000, the
138 sequence of commands is as follows:</para>
140 $ lfs getdirstripe -M ./testremote <lineannotation>which MDT is dir on?</lineannotation>
142 $ for i in $(seq 3); do touch ./testremote/${i}.txt; done <lineannotation>create test files</lineannotation>
143 $ for i in $(seq 3); do lfs getstripe -M ./testremote/${i}.txt; done <lineannotation>check files are on MDT 1</lineannotation>
147 $ lfs migrate -m 0 ./testremote <lineannotation>migrate testremote to MDT 0</lineannotation>
148 $ lfs getdirstripe -M ./testremote <lineannotation>which MDT is dir on now?</lineannotation>
150 $ for i in $(seq 3); do lfs getstripe -M ./testremote/${i}.txt; done <lineannotation>check files are on MDT 0 too</lineannotation>
154 <para>For more information, see <literal>man lfs-migrate</literal>.</para>
155 <warning><para>Currently, only whole directories can be migrated
156 between MDTs. During migration each file receives a new identifier
157 (FID). As a consequence, the file will report a new inode number. Some
158 system tools (for example, backup and archiving tools) may consider
159 the migrated files to be new, even though the contents are unchanged.
163 <section xml:id="dbdoclet.50438211_75549">
166 <primary>I/O</primary>
167 <secondary>pools</secondary>
170 <primary>maintenance</primary>
171 <secondary>pools</secondary>
174 <primary>pools</primary>
175 </indexterm>Creating and Managing OST Pools</title>
176 <para>The OST pools feature enables users to group OSTs together to make
177 object placement more flexible. A 'pool' is the name associated with an
178 arbitrary subset of OSTs in a Lustre cluster.</para>
179 <para>OST pools follow these rules:</para>
182 <para>An OST can be a member of multiple pools.</para>
185 <para>No ordering of OSTs in a pool is defined or implied.</para>
188 <para>Stripe allocation within a pool follows the same rules as the
189 normal stripe allocator.</para>
192 <para>OST membership in a pool is flexible, and can change over time.</para>
196 <para>When an OST pool is defined, it can be used to allocate files. When
197 file or directory striping is set to a pool, only OSTs in the pool are
198 candidates for striping. If a stripe_index is specified that refers to an
199 OST that is not a member of the pool, an error is returned.</para>
200 <para>OST pools are used only at file creation. If the definition of a pool
201 changes (an OST is added or removed or the pool is destroyed),
202 already-created files are not affected.</para>
205 <literal>EINVAL</literal>) results if you create a file using an empty pool.</para>
209 <para>If a directory has pool striping set and the pool is subsequently
210 removed, the new files created in this directory have the (non-pool)
211 default striping pattern for that directory applied and no error is returned.</para>
215 <title>Working with OST Pools</title>
216 <para>OST pools are defined in the configuration log on the MGS. Use the
217 lctl command to:</para>
220 <para>Create/destroy a pool</para>
223 <para>Add/remove OSTs in a pool</para>
226 <para>List pools and OSTs in a specific pool</para>
229 <para>The lctl command MUST be run on the MGS. Another requirement for
230 managing OST pools is to either have the MDT and MGS on the same node or
231 have a Lustre client mounted on the MGS node, if it is separate from the
232 MDS. This is needed to validate that the pool commands being run are correct.</para>
236 <literal>writeconf</literal> command on the MDS erases all pools
237 information (as well as any other parameters set using
238 <literal>lctl conf_param</literal>). We recommend that the pool
239 definitions (and
240 <literal>conf_param</literal> settings) be executed using a script, so
241 they can be reproduced easily after a
242 <literal>writeconf</literal> is performed.</para>
244 <para>To create a new pool, run:</para>
247 <replaceable>fsname</replaceable>.
248 <replaceable>poolname</replaceable>
251 <para>The pool name is an ASCII string up to 15 characters.</para>
253 <para>To add the named OST to a pool, run:</para>
256 <replaceable>fsname</replaceable>.
257 <replaceable>poolname</replaceable>
258 <replaceable>ost_list</replaceable>
265 <replaceable>ost_list</replaceable> is
266 <replaceable>fsname</replaceable>-OST
267 <replaceable>index_range</replaceable></literal>
273 <replaceable>index_range</replaceable> is
274 <replaceable>ost_index_start</replaceable>-
275 <replaceable>ost_index_end[,index_range]</replaceable></literal> or
277 <replaceable>ost_index_start</replaceable>-
278 <replaceable>ost_index_end/step</replaceable></literal></para>
283 <replaceable>fsname</replaceable>
284 </literal> and/or ending
285 <literal>_UUID</literal> are missing, they are automatically added.</para>
286 <para>For example, to add even-numbered OSTs to
287 <literal>pool1</literal> on file system
288 <literal>testfs</literal>, run a single command (
289 <literal>pool_add</literal>) to add many OSTs to the pool at one time:</para>
293 mgs# lctl pool_add testfs.pool1 OST[0-10/2]
297 <para>Each time an OST is added to a pool, a new
298 <literal>llog</literal> configuration record is created. For
299 convenience, you can add a range of OSTs in a single command, as shown above.</para>
301 <para>To remove a named OST from a pool, run:</para>
303 mgs# lctl pool_remove
304 <replaceable>fsname</replaceable>.
305 <replaceable>poolname</replaceable>
306 <replaceable>ost_list</replaceable>
308 <para>To destroy a pool, run:</para>
310 mgs# lctl pool_destroy
311 <replaceable>fsname</replaceable>.
312 <replaceable>poolname</replaceable>
315 <para>All OSTs must be removed from a pool before it can be destroyed.</para>
318 <para>To list pools in the named file system, run:</para>
321 <replaceable>fsname|pathname</replaceable>
323 <para>To list OSTs in a named pool, run:</para>
326 <replaceable>fsname</replaceable>.
327 <replaceable>poolname</replaceable>
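<para>For example, a brief end-to-end illustration (assuming a file system
named <literal>testfs</literal> and a pool named <literal>pool1</literal>;
the OST indices are arbitrary):</para>
<screen>mgs# lctl pool_new testfs.pool1                <lineannotation>create the pool</lineannotation>
mgs# lctl pool_add testfs.pool1 OST[0-10/2]    <lineannotation>add even-numbered OSTs</lineannotation>
mgs# lctl pool_list testfs                     <lineannotation>list pools in the file system</lineannotation>
mgs# lctl pool_list testfs.pool1               <lineannotation>list OSTs in the pool</lineannotation>
mgs# lctl pool_remove testfs.pool1 OST[0-10/2] <lineannotation>empty the pool</lineannotation>
mgs# lctl pool_destroy testfs.pool1            <lineannotation>destroy the now-empty pool</lineannotation></screen>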
330 <title>Using the lfs Command with OST Pools</title>
331 <para>Several lfs commands can be run with OST pools. Use the
332 <literal>lfs setstripe</literal> command to associate a directory with
333 an OST pool. This causes all new regular files and directories in the
334 directory to be created in the pool. The lfs command can be used to
335 list pools in a file system and OSTs in a named pool.</para>
336 <para>To associate a directory with a pool, so all new files and
337 directories will be created in the pool, run:</para>
339 client# lfs setstripe --pool|-p pool_name
340 <replaceable>filename|dirname</replaceable>
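<para>For example, to have all new files under a directory allocated from
<literal>pool1</literal> (the directory name is illustrative):</para>
<screen>client# lfs setstripe -p pool1 /mnt/testfs/dir1</screen>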
342 <para>To set striping patterns, run:</para>
344 client# lfs setstripe [--stripe-size|-S stripe_size] [--stripe-index|-i start_ost]
345 [--stripe-count|-c stripe_count] [--pool|-p pool_name]
347 <replaceable>dir|filename</replaceable>
350 <para>If you specify striping with an invalid pool name, because the
351 pool does not exist or the pool name was mistyped,
352 <literal>lfs setstripe</literal> returns an error. Run
353 <literal>lfs pool_list</literal> to make sure the pool exists and the
354 pool name is entered correctly.</para>
358 <literal>--pool</literal> option for lfs setstripe is compatible with
359 other modifiers. For example, you can set striping on a directory to
360 use an explicit starting index.</para>
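<para>For example, a pool can be combined with an explicit stripe count and
starting OST index (a brief sketch; the values are illustrative, and the
starting index must refer to an OST that is a member of the pool):</para>
<screen>client# lfs setstripe -c 2 -i 0 -p pool1 /mnt/testfs/dir1</screen>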
367 <primary>pools</primary>
368 <secondary>usage tips</secondary>
369 </indexterm>Tips for Using OST Pools</title>
370 <para>Here are several suggestions for using OST pools.</para>
373 <para>A directory or file can be given an extended attribute (EA)
374 that restricts striping to a pool.</para>
377 <para>Pools can be used to group OSTs with the same technology or
378 performance (slower or faster), or that are preferred for certain
379 jobs. Examples are SATA OSTs versus SAS OSTs, or remote OSTs versus local OSTs.</para>
383 <para>A file created in an OST pool tracks the pool by keeping the
384 pool name in the file LOV EA.</para>
389 <section xml:id="dbdoclet.50438211_11204">
392 <primary>I/O</primary>
393 <secondary>adding an OST</secondary>
394 </indexterm>Adding an OST to a Lustre File System</title>
395 <para>To add an OST to an existing Lustre file system:</para>
398 <para>Add a new OST by running the following commands on the OSS node:</para>
400 oss# mkfs.lustre --fsname=testfs --mgsnode=mds16@tcp0 --ost --index=12 /dev/sda
401 oss# mkdir -p /mnt/testfs/ost12
402 oss# mount -t lustre /dev/sda /mnt/testfs/ost12
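<para>Optionally, verify from a client that the new OST is visible and
reports free space (a quick check using the same command as in the earlier
example):</para>
<screen>client# lfs df -h /mnt/testfs</screen>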
406 <para>Migrate the data (if necessary).</para>
407 <para>The file system is quite unbalanced when new empty OSTs are
408 added. New file creations are automatically balanced. If this is a
409 scratch file system or files are pruned at a regular interval, then no
410 further work may be needed. Files existing prior to the expansion can
411 be rebalanced with an in-place copy, which can be done with a simple script.</para>
413 <para>The basic method is to copy existing files to a temporary file,
414 then move the temp file over the old one. This should not be attempted
415 with files that are currently being written to by users or
416 applications. This operation redistributes the stripes over the entire set of OSTs.</para>
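<para>A minimal sketch of this copy-and-rename approach for a single file
(the file name is illustrative; do not use this on files that are currently
in use):</para>
<screen>client# cp -a /mnt/testfs/file1 /mnt/testfs/file1.tmp  <lineannotation>copy gets a new layout</lineannotation>
client# mv /mnt/testfs/file1.tmp /mnt/testfs/file1      <lineannotation>replace the original</lineannotation></screen>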
418 <para>A very clever migration script would do the following:</para>
421 <para>Examine the current distribution of data.</para>
424 <para>Calculate how much data should move from each full OST to the
428 <para>Search for files on a given full OST (using
429 <literal>lfs getstripe</literal>).</para>
432 <para>Force the new destination OST (using
433 <literal>lfs setstripe</literal>).</para>
436 <para>Copy only enough files to address the imbalance.</para>
441 <para>If a Lustre file system administrator wants to explore this approach
442 further, per-OST disk-usage statistics can be obtained on a client with
443 <literal>lfs df</literal>, as shown in <xref linkend="dbdoclet.50438211_17536" />.</para>
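<para>For example, a minimal sketch of the search-and-migrate step, assuming
the full OST is <literal>testfs-OST0002</literal> and that large files on it
should be moved first (the size threshold is illustrative):</para>
<screen>client# lfs find /mnt/testfs --ost testfs-OST0002_UUID --size +1G | lfs_migrate -y</screen>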
445 <section xml:id="dbdoclet.50438211_80295">
448 <primary>I/O</primary>
449 <secondary>direct</secondary>
450 </indexterm>Performing Direct I/O</title>
451 <para>The Lustre software supports the
452 <literal>O_DIRECT</literal> flag to open.</para>
453 <para>Applications using the
454 <literal>read()</literal> and
455 <literal>write()</literal> calls must supply buffers aligned on a page
456 boundary (usually 4 KB). If the alignment is not correct, the call returns
457 <literal>-EINVAL</literal>. Direct I/O may help performance in cases where
458 the client is doing a large amount of I/O and is CPU-bound (CPU utilization close to 100%).</para>
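<para>From the command line, direct I/O can be exercised with
<literal>dd</literal> (a brief illustration; the file name and sizes are
arbitrary, and the block size must be a multiple of the page size):</para>
<screen>client# dd if=/dev/zero of=/mnt/testfs/dio_file bs=1M count=100 oflag=direct</screen>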
461 <title>Making File System Objects Immutable</title>
462 <para>An immutable file or directory is one that cannot be modified,
463 renamed, or removed. To set this flag, run:</para>
466 <replaceable>file</replaceable>
468 <para>To remove this flag, use
469 <literal>chattr -i</literal></para>
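<para>For example (a brief illustration; the file name is arbitrary, and
<literal>lsattr</literal> can be used to verify the flag):</para>
<screen>client# chattr +i /mnt/testfs/file1
client# lsattr /mnt/testfs/file1
client# chattr -i /mnt/testfs/file1</screen>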
472 <section xml:id="dbdoclet.50438211_61024">
473 <title>Other I/O Options</title>
474 <para>This section describes other I/O options, including checksums, and
475 the ptlrpcd thread pool.</para>
477 <title>Lustre Checksums</title>
478 <para>To guard against network data corruption, a Lustre client can
479 perform two types of data checksums: in-memory (for data in client
480 memory) and wire (for data sent over the network). For each checksum
481 type, a 32-bit checksum of the data read or written on both the client
482 and server is computed, to ensure that the data has not been corrupted in
483 transit over the network. The
484 <literal>ldiskfs</literal> backing file system does NOT do any persistent
485 checksumming, so it does not detect corruption of data in the OST file system.</para>
487 <para>The checksumming feature is enabled, by default, on individual
488 client nodes. If the client or OST detects a checksum mismatch, then an
489 error is logged in the syslog of the form:</para>
491 LustreError: BAD WRITE CHECKSUM: changed in transit before arrival at OST: \
492 from 192.168.1.1@tcp inum 8991479/2386814769 object 1127239/0 extent [10240\
495 <para>If this happens, the client will re-read or re-write the affected
496 data up to five times to get a good copy of the data over the network. If
497 it is still not possible, then an I/O error is returned to the application.</para>
499 <para>To enable both types of checksums (in-memory and wire), run:</para>
501 lctl set_param llite.*.checksum_pages=1
503 <para>To disable both types of checksums (in-memory and wire),
506 lctl set_param llite.*.checksum_pages=0
508 <para>To check the status of a wire checksum, run:</para>
510 lctl get_param osc.*.checksums
513 <title>Changing Checksum Algorithms</title>
514 <para>By default, the Lustre software uses the adler32 checksum
515 algorithm, because it is robust and has a lower impact on performance
516 than crc32. The Lustre file system administrator can change the
517 checksum algorithm via
518 <literal>lctl set_param</literal>, depending on what is supported in the kernel.</para>
520 <para>To check which checksum algorithm is being used by the Lustre
521 software, run:</para>
523 $ lctl get_param osc.*.checksum_type
525 <para>To change the wire checksum algorithm, run:</para>
527 $ lctl set_param osc.*.checksum_type=
528 <replaceable>algorithm</replaceable>
531 <para>The in-memory checksum always uses the adler32 algorithm, if
532 available, and only falls back to crc32 if adler32 cannot be used.</para>
535 <para>In the following example, the
536 <literal>lctl get_param</literal> command is used to determine that the
537 Lustre software is using the adler32 checksum algorithm. Then the
538 <literal>lctl set_param</literal> command is used to change the checksum
539 algorithm to crc32. A second
540 <literal>lctl get_param</literal> command confirms that the crc32
541 checksum algorithm is now in use.</para>
543 $ lctl get_param osc.*.checksum_type
544 osc.testfs-OST0000-osc-ffff81012b2c48e0.checksum_type=crc32 [adler]
545 $ lctl set_param osc.*.checksum_type=crc32
546 osc.testfs-OST0000-osc-ffff81012b2c48e0.checksum_type=crc32
547 $ lctl get_param osc.*.checksum_type
548 osc.testfs-OST0000-osc-ffff81012b2c48e0.checksum_type=[crc32] adler
553 <title>Ptlrpc Thread Pool</title>
554 <para>Releases prior to Lustre software release 2.2 used two portal RPC
555 daemons for each client/server pair. One daemon handled all synchronous
556 IO requests, and the second daemon handled all asynchronous (non-IO)
557 RPCs. The increasing use of large SMP nodes for Lustre servers exposed
558 some scaling issues. The lack of threads for large SMP nodes resulted in
559 cases where a single CPU would be 100% utilized and other CPUs would be
560 relatively idle. This is especially noticeable when a single client
561 traverses a large directory.</para>
562 <para>Lustre software release 2.2.x implements a ptlrpc thread pool, so
563 that multiple threads can be created to serve asynchronous RPC requests.
564 The number of threads spawned is controlled at module load time using
565 module options. By default one thread is spawned per CPU, with a minimum
566 of 2 threads spawned irrespective of module options.</para>
567 <para>One of the issues with thread operations is the cost of moving a
568 thread context from one CPU to another with the resulting loss of CPU
569 cache warmth. To reduce this cost, ptlrpc threads can be bound to a CPU.
570 However, if the CPUs are busy, a bound thread may not be able to respond
571 quickly, as the bound CPU may be busy with other tasks and the thread
572 must wait to schedule.</para>
573 <para>Because of these considerations, the pool of ptlrpc threads can be
574 a mixture of bound and unbound threads. The system operator can balance
575 the thread mixture based on system size and workload.</para>
577 <title>ptlrpcd parameters</title>
578 <para>These parameters should be set in
579 <literal>/etc/modprobe.conf</literal> or in the
580 <literal>/etc/modprobe.d</literal> directory, as options for the ptlrpc
583 options ptlrpcd max_ptlrpcds=XXX
585 <para>Sets the number of ptlrpcd threads created at module load time.
586 The default if not specified is one thread per CPU, including
587 hyper-threaded CPUs. The lower bound is 2 (old ptlrpcd behavior).</para>
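<para>For example, a minimal sketch of a module options file under
<literal>/etc/modprobe.d</literal> (the file name and thread count are
illustrative):</para>
<screen># cat /etc/modprobe.d/lustre-ptlrpcd.conf
options ptlrpcd max_ptlrpcds=16</screen>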
589 options ptlrpcd ptlrpcd_bind_policy=[1-4]
591 <para>Controls the binding of threads to CPUs. There are four policy
596 <literal role="bold">
597 PDB_POLICY_NONE</literal> (ptlrpcd_bind_policy=1) All threads are unbound.</para>
602 <literal role="bold">
603 PDB_POLICY_FULL</literal> (ptlrpcd_bind_policy=2) All threads
604 attempt to bind to a CPU.</para>
608 <literal role="bold">
609 PDB_POLICY_PAIR</literal> (ptlrpcd_bind_policy=3) This is the
610 default policy. Threads are allocated as a bound/unbound pair. Each
611 thread (bound or free) has a partner thread. The partnering is used
612 by the ptlrpcd load policy, which determines how threads are
613 allocated to CPUs.</para>
617 <literal role="bold">
618 PDB_POLICY_NEIGHBOR</literal> (ptlrpcd_bind_policy=4) Threads are
619 allocated as a bound/unbound pair. Each thread (bound or free) has
620 two partner threads.</para>