1 <?xml version='1.0' encoding='utf-8'?>
2 <chapter xmlns="http://docbook.org/ns/docbook"
3 xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en-US"
4 xml:id="managingfilesystemio">
5 <title xml:id="managingfilesystemio.title">Managing the File System and
7 <section xml:id="handling_full_ost">
10 <primary>I/O</primary>
13 <primary>I/O</primary>
14 <secondary>full OSTs</secondary>
15 </indexterm>Handling Full OSTs</title>
16 <para>Sometimes a Lustre file system becomes unbalanced, often due to
incorrectly specified stripe settings, or when very large files are created
18 that are not striped over all of the OSTs. Lustre will automatically avoid
19 allocating new files on OSTs that are full. If an OST is completely full and
20 more data is written to files already located on that OST, an error occurs.
21 The procedures below describe how to handle a full OST.</para>
22 <para>The MDS will normally handle space balancing automatically at file
23 creation time, and this procedure is normally not needed, but manual data
24 migration may be desirable in some cases (e.g. creating very large files
25 that would consume more than the total free space of the full OSTs).</para>
29 <primary>I/O</primary>
30 <secondary>OST space usage</secondary>
31 </indexterm>Checking OST Space Usage</title>
32 <para>The example below shows an unbalanced file system:</para>
35 UUID bytes Used Available \
37 testfs-MDT0000_UUID 4.4G 214.5M 3.9G \
39 testfs-OST0000_UUID 2.0G 751.3M 1.1G \
40 37% /mnt/testfs[OST:0]
41 testfs-OST0001_UUID 2.0G 755.3M 1.1G \
42 37% /mnt/testfs[OST:1]
43 testfs-OST0002_UUID 2.0G 1.7G 155.1M \
44 86% /mnt/testfs[OST:2] ****
45 testfs-OST0003_UUID 2.0G 751.3M 1.1G \
46 37% /mnt/testfs[OST:3]
47 testfs-OST0004_UUID 2.0G 747.3M 1.1G \
48 37% /mnt/testfs[OST:4]
49 testfs-OST0005_UUID 2.0G 743.3M 1.1G \
50 36% /mnt/testfs[OST:5]
52 filesystem summary: 11.8G 5.4G 5.8G \
55 <para>In this case, OST0002 is almost full and when an attempt is made to
56 write additional information to the file system (even with uniform
57 striping over all the OSTs), the write command fails as follows:</para>
client# lfs setstripe -S 4M -i 0 -c -1 /mnt/testfs
60 client# dd if=/dev/zero of=/mnt/testfs/test_3 bs=10M count=100
61 dd: writing '/mnt/testfs/test_3': No space left on device
64 1017192448 bytes (1.0 GB) copied, 23.2411 seconds, 43.8 MB/s
70 <primary>I/O</primary>
71 <secondary>disabling OST creates</secondary>
</indexterm>Disabling Creates on a Full OST</title>
<para>If OST usage is imbalanced, with one or more OSTs close to being full
while others still have plenty of free space, the MDS will typically avoid
allocating new files on the full OST(s) automatically so that the file
system does not run out of space. The full OSTs may optionally be
deactivated manually on the MDS to ensure the MDS will not allocate
new objects there.</para>
81 <para>Log into the MDS server and use the <literal>lctl</literal>
82 command to stop new object creation on the full OST(s):
85 mds# lctl set_param osp.<replaceable>fsname</replaceable>-OST<replaceable>nnnn</replaceable>*.max_create_count=0
89 <para>When new files are created in the file system, they will only use
90 the remaining OSTs. Either manual space rebalancing can be done by
91 migrating data to other OSTs, as shown in the next section, or normal
92 file deletion and creation can passively rebalance the space usage.</para>
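<para>For example, using the unbalanced file system shown above, creates on
the full OST0002 can be disabled and the setting verified as follows (a
sketch; the exact <literal>osp</literal> device name depends on the file
system name and the number of MDTs):</para>
<screen>mds# lctl set_param osp.testfs-OST0002*.max_create_count=0
mds# lctl get_param osp.testfs-OST0002*.max_create_count</screen>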
97 <primary>I/O</primary>
98 <secondary>migrating data</secondary>
101 <primary>maintenance</primary>
102 <secondary>full OSTs</secondary>
103 </indexterm>Migrating Data within a File System</title>
105 <para>If there is a need to move the file data from the current
106 OST(s) to new OST(s), the data must be migrated (copied)
107 to the new location. The simplest way to do this is to use the
108 <literal>lfs_migrate</literal> command, as described in
109 <xref linkend="lustremaint.adding_new_ost" />.</para>
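<para>As an illustrative sketch (using the mount point and full OST from the
earlier example), files residing on the full OST can be located with
<literal>lfs find</literal> and then moved to other OSTs with
<literal>lfs_migrate</literal>:</para>
<screen>client# lfs find /mnt/testfs --ost testfs-OST0002 -type f | lfs_migrate -y</screen>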
114 <primary>I/O</primary>
115 <secondary>bringing OST online</secondary>
118 <primary>maintenance</primary>
119 <secondary>bringing OST online</secondary>
120 </indexterm>Returning an Inactive OST Back Online</title>
<para>Once the full OST(s) are no longer severely imbalanced, due
122 to either active or passive data redistribution, they should be
123 reactivated so they will again have new files allocated on them.</para>
mds# lctl set_param osp.testfs-OST0002.max_create_count=20000
130 <primary>migrating metadata</primary>
131 </indexterm>Migrating Metadata within a Filesystem</title>
132 <section remap="h3" condition='l28'>
134 <primary>migrating metadata</primary>
135 </indexterm>Whole Directory Migration</title>
136 <para>Lustre software version 2.8 includes a feature
137 to migrate metadata (directories and inodes therein) between MDTs.
138 This migration can only be performed on whole directories. Striped
139 directories are not supported until Lustre 2.12. For example, to
140 migrate the contents of the <literal>/testfs/remotedir</literal>
directory from the MDT on which it currently resides to MDT0000, so that
the original MDT can be removed, the sequence of commands is as follows:
145 $ lfs getdirstripe -m ./remotedir <lineannotation>which MDT is dir on?</lineannotation>
$ touch ./remotedir/file.{1,2,3}.txt <lineannotation>create test files</lineannotation>
$ lfs getstripe -m ./remotedir/file.*.txt <lineannotation>check files are on MDT0001</lineannotation>
$ lfs migrate -m 0 ./remotedir <lineannotation>migrate remotedir to MDT0000</lineannotation>
153 $ lfs getdirstripe -m ./remotedir <lineannotation>which MDT is dir on now?</lineannotation>
$ lfs getstripe -m ./remotedir/file.*.txt <lineannotation>check files are on MDT0000</lineannotation>
159 <para>For more information, see <literal>man lfs-migrate</literal>.
161 <warning><para>During migration each file receives a new identifier
162 (FID). As a consequence, the file will report a new inode number to
163 userspace applications. Some system tools (for example, backup and
164 archiving tools, NFS, Samba) that identify files by inode number may
165 consider the migrated files to be new, even though the contents are
unchanged. If a Lustre file system is re-exported via NFS, the migrated
files may become inaccessible during and after migration if the
client or server is caching a stale file handle with the old FID.
169 Restarting the NFS service will flush the local file handle cache,
170 but clients may also need to be restarted as they may cache stale
171 file handles as well.
174 <section remap="h3" condition='l2C'>
176 <primary>migrating metadata</primary>
177 </indexterm>Striped Directory Migration</title>
<para>Lustre 2.8 included a feature to migrate metadata (directories
and inodes therein) between MDTs; however, it did not support migration
of striped directories or changing the stripe count of an existing
directory. Lustre 2.12 adds support for migrating and restriping
directories. The <literal>lfs migrate -m</literal> command can still
only be performed on whole directories, though it migrates both
the specified directory and its sub-entries recursively.
185 For example, to migrate the contents of a large directory
186 <literal>/testfs/largedir</literal> from its current location on
187 MDT0000 to MDT0001 and MDT0003, run the following command:</para>
188 <screen>$ lfs migrate -m 1,3 /testfs/largedir</screen>
<para>Metadata migration moves each file's directory entry and inode to
the target MDTs, but it does not touch the file data. During migration,
the directory and its sub-files remain accessible as usual, though the
same warning above applies to tools that depend on the file inode number.
Migration may fail for various reasons, such as an MDS restart or a full
disk. In those cases, some of the sub-files may have been migrated to
the new MDTs, while others are still on the original MDT; the files
remain accessible either way. Once the underlying issue is fixed, the
same <literal>lfs migrate -m</literal> command should be executed again
to finish the migration. However, a failed migration cannot be aborted,
nor can it be restarted with different target MDTs than the previous
migration command specified.</para>
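<para>After the migration completes, the directory's new MDT layout can be
checked with <literal>lfs getdirstripe</literal>, for example:</para>
<screen>$ lfs getdirstripe /testfs/largedir</screen>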
201 <section remap="h3" condition='l2C'>
203 <primary>migrating metadata</primary>
204 </indexterm>Directory Restriping</title>
<para>Lustre 2.14 includes a feature to change the stripe count of an
206 existing directory. The <literal>lfs setdirstripe -c</literal> command
207 can be performed on an existing directory to change its stripe count.
For example, if a directory <literal>/testfs/testdir</literal> is becoming
large, run the following command to increase its stripe count to
<literal>2</literal>:</para>
211 <screen>$ lfs setdirstripe -c 2 /testfs/testdir</screen>
<para>By default, directory restriping migrates only the sub-file dirents;
it does not move the inodes. To enable moving both dirents and inodes, run
the following command on all MDS nodes:</para>
215 <screen>mds$ lctl set_param mdt.*.dir_restripe_nsonly=0</screen>
<para>Target MDTs cannot be specified for directory restriping; instead,
the server selects MDTs for the added stripes based on their space and
inode usage. As with directory migration, the directory and its sub-files
remain accessible during restriping. Similarly, a failed restriping cannot
be aborted, but the server resumes it automatically when it notices an
unfinished restriping.</para>
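<para>For example, after the restriping above completes, the new MDT stripe
count of the directory can be confirmed with
<literal>lfs getdirstripe</literal>:</para>
<screen>$ lfs getdirstripe -c /testfs/testdir</screen>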
223 <section remap="h3" condition='l2C'>
225 <primary>migrating metadata</primary>
226 </indexterm>Directory Auto-Split</title>
<para>Lustre 2.14 includes a feature to automatically increase the stripe
count of a directory when it becomes large. This can be enabled with the
following command:</para>
230 <screen>mds$ lctl set_param mdt.*.enable_dir_auto_split=1</screen>
<para>The sub-file count that triggers directory auto-split is 50k by
default, and it can be changed with the following command:</para>
233 <screen>mds$ lctl set_param mdt.*.dir_split_count=value</screen>
<para>When a plain (unstriped) directory is split for the first time, its
stripe count increases from 0 to 4; it increases from 4 to 8 on the second
split, and so on. However, the final stripe count will not exceed the total
MDT count, and splitting stops once the directory is distributed across all
MDTs. This delta value (the number of stripes added per split) can be
changed with the following command:</para>
239 <screen>mds$ lctl set_param mdt.*.dir_split_delta=value</screen>
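<para>The current auto-split settings can be read back on the MDS nodes with
<literal>lctl get_param</literal>, for example:</para>
<screen>mds$ lctl get_param mdt.*.enable_dir_auto_split mdt.*.dir_split_count mdt.*.dir_split_delta</screen>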
243 <section xml:id="managingfilesystemio.managing_ost_pools">
246 <primary>I/O</primary>
247 <secondary>pools</secondary>
250 <primary>maintenance</primary>
251 <secondary>pools</secondary>
254 <primary>pools</primary>
255 </indexterm>Creating and Managing OST Pools</title>
256 <para>The OST pools feature enables users to group OSTs together to make
257 object placement more flexible. A 'pool' is the name associated with an
258 arbitrary subset of OSTs in a Lustre cluster.</para>
259 <para>OST pools follow these rules:</para>
262 <para>An OST can be a member of multiple pools.</para>
265 <para>No ordering of OSTs in a pool is defined or implied.</para>
268 <para>Stripe allocation within a pool follows the same rules as the
269 normal stripe allocator.</para>
272 <para>OST membership in a pool is flexible, and can change over
276 <para>When an OST pool is defined, it can be used to allocate files. When
277 file or directory striping is set to a pool, only OSTs in the pool are
278 candidates for striping. If a stripe_index is specified which refers to an
279 OST that is not a member of the pool, an error is returned.</para>
280 <para>OST pools are used only at file creation. If the definition of a pool
281 changes (an OST is added or removed or the pool is destroyed),
282 already-created files are not affected.</para>
285 <literal>EINVAL</literal>) results if you create a file using an empty
289 <para>If a directory has pool striping set and the pool is subsequently
290 removed, the new files created in this directory have the (non-pool)
291 default striping pattern for that directory applied and no error is
295 <title>Working with OST Pools</title>
296 <para>OST pools are defined in the configuration log on the MGS. Use the
297 lctl command to:</para>
300 <para>Create/destroy a pool</para>
303 <para>Add/remove OSTs in a pool</para>
306 <para>List pools and OSTs in a specific pool</para>
309 <para>The lctl command MUST be run on the MGS. Another requirement for
310 managing OST pools is to either have the MDT and MGS on the same node or
311 have a Lustre client mounted on the MGS node, if it is separate from the
312 MDS. This is needed to validate the pool commands being run are
<literal>writeconf</literal> command on the MDS erases all pool
information (as well as any other parameters set using
318 <literal>lctl conf_param</literal>). We recommend that the pools
320 <literal>conf_param</literal> settings) be executed using a script, so
321 they can be reproduced easily after a
322 <literal>writeconf</literal> is performed.</para>
324 <para>To create a new pool, run:</para>
326 mgs# lctl pool_new <replaceable>fsname</replaceable>.<replaceable>poolname</replaceable>
329 <para>The pool name is an ASCII string up to 15 characters.</para>
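<para>For example, to create a pool named <literal>pool1</literal> in the
<literal>testfs</literal> file system (the same names used in the examples
below):</para>
<screen>mgs# lctl pool_new testfs.pool1</screen>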
331 <para>To add the named OST to a pool, run:</para>
333 mgs# lctl pool_add <replaceable>fsname</replaceable>.<replaceable>poolname</replaceable> <replaceable>ost_list</replaceable>
<replaceable>ost_list</replaceable> is
341 <replaceable>fsname</replaceable>-OST
342 <replaceable>index_range</replaceable></literal>
<replaceable>index_range</replaceable> is
349 <replaceable>ost_index_start</replaceable>-
350 <replaceable>ost_index_end[,index_range]</replaceable></literal> or
352 <replaceable>ost_index_start</replaceable>-
353 <replaceable>ost_index_end/step</replaceable></literal></para>
358 <replaceable>fsname</replaceable>
359 </literal> and/or ending
360 <literal>_UUID</literal> are missing, they are automatically added.</para>
361 <para>For example, to add even-numbered OSTs to
362 <literal>pool1</literal> on file system
363 <literal>testfs</literal>, run a single command (
364 <literal>pool_add</literal>) to add many OSTs to the pool at one
368 lctl pool_add testfs.pool1 OST[0-10/2]
<para>Each time an OST is added to a pool, a new
<literal>llog</literal> configuration record is created. For
convenience, add multiple OSTs with a single
<literal>pool_add</literal> command, as shown in the example above, to
minimize the number of configuration records.</para>
376 <para>To remove a named OST from a pool, run:</para>
378 mgs# lctl pool_remove
379 <replaceable>fsname</replaceable>.
380 <replaceable>poolname</replaceable>
381 <replaceable>ost_list</replaceable>
383 <para>To destroy a pool, run:</para>
385 mgs# lctl pool_destroy
386 <replaceable>fsname</replaceable>.
387 <replaceable>poolname</replaceable>
390 <para>All OSTs must be removed from a pool before it can be
393 <para>To list pools in the named file system, run:</para>
396 <replaceable>fsname|pathname</replaceable>
398 <para>To list OSTs in a named pool, run:</para>
401 <replaceable>fsname</replaceable>.
402 <replaceable>poolname</replaceable>
405 <title>Using the lfs Command with OST Pools</title>
406 <para>Several lfs commands can be run with OST pools. Use the
407 <literal>lfs setstripe</literal> command to associate a directory with
408 an OST pool. This causes all new regular files and directories in the
409 directory to be created in the pool. The lfs command can be used to
410 list pools in a file system and OSTs in a named pool.</para>
411 <para>To associate a directory with a pool, so all new files and
412 directories will be created in the pool, run:</para>
414 client# lfs setstripe --pool|-p pool_name
415 <replaceable>filename|dirname</replaceable>
417 <para>To set striping patterns, run:</para>
client# lfs setstripe [--stripe-size|-S stripe_size] [--stripe-index|-i start_ost]
420 [--stripe-count|-c stripe_count] [--overstripe-count|-C stripe_count]
421 [--pool|-p pool_name]
423 <replaceable>dir|filename</replaceable>
426 <para>If you specify striping with an invalid pool name, because the
427 pool does not exist or the pool name was mistyped,
428 <literal>lfs setstripe</literal> returns an error. Run
429 <literal>lfs pool_list</literal> to make sure the pool exists and the
430 pool name is entered correctly.</para>
434 <literal>--pool</literal> option for lfs setstripe is compatible with
435 other modifiers. For example, you can set striping on a directory to
436 use an explicit starting index.</para>
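<para>For instance (the directory name is hypothetical), a directory can be
restricted to <literal>pool1</literal> while also specifying a stripe count
and an explicit starting OST index within that pool:</para>
<screen>client# lfs setstripe --pool pool1 -c 2 -i 2 /mnt/testfs/dir1</screen>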
438 <note condition='l2G'>
439 <para>There are several reserved pool keywords:</para>
443 <emphasis role="bold">
444 <literal>--pool '' or --pool inherit</literal></emphasis>
445 to force a component to inherit the pool from the parent or
446 root directory instead of the previous PFL's component (see
447 <xref linkend="pfl" />).</para>
451 <emphasis role="bold">
452 <literal>--pool ignore</literal></emphasis>
453 to force creation of a file or a PFL's component without a pool
454 set (no inheritance from last component, root or parent).
464 <primary>pools</primary>
465 <secondary>usage tips</secondary>
466 </indexterm>Tips for Using OST Pools</title>
467 <para>Here are several suggestions for using OST pools.</para>
<para>A directory or file can be given an extended attribute (EA)
that restricts striping to a pool.</para>
474 <para>Pools can be used to group OSTs with the same technology or
475 performance (slower or faster), or that are preferred for certain
476 jobs. Examples are SATA OSTs versus SAS OSTs or remote OSTs versus
<para>A file created in an OST pool tracks the pool by keeping the
pool name in the file's LOV EA.</para>
486 <section xml:id="adding_ost">
489 <primary>I/O</primary>
490 <secondary>adding an OST</secondary>
491 </indexterm>Adding an OST to a Lustre File System</title>
<para>To add an OST to an existing Lustre file system:</para>
<para>Add and mount a new OST by running the following commands:</para>
497 oss# mkfs.lustre --fsname=testfs --mgsnode=mds16@tcp0 --ost --index=12 /dev/sda
498 oss# mkdir -p /mnt/testfs/ost12
499 oss# mount -t lustre /dev/sda /mnt/testfs/ost12
<para>Migrate the data (if necessary).</para>
<para>The file system is quite unbalanced when new, empty OSTs are
added. New file creations are automatically balanced. If this is a
scratch file system, or if files are pruned at regular intervals, then no
further work may be needed. Files that existed prior to the expansion can
be rebalanced with an in-place copy, which can be done with a simple
<para>The basic method is to copy an existing file to a temporary file,
then move the temporary file over the old one (see the sketch at the end
of this section). This should not be attempted with files that are
currently being written to by users or
applications. This operation redistributes the stripes over the entire
515 <para>A very clever migration script would do the following:</para>
518 <para>Examine the current distribution of data.</para>
521 <para>Calculate how much data should move from each full OST to the
525 <para>Search for files on a given full OST (using
526 <literal>lfs getstripe</literal>).</para>
529 <para>Force the new destination OST (using
530 <literal>lfs setstripe</literal>).</para>
533 <para>Copy only enough files to address the imbalance.</para>
538 <para>If a Lustre file system administrator wants to explore this approach
539 further, per-OST disk-usage statistics can be found in the
540 <literal>osc.*.rpc_stats</literal> parameter file.</para>
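<para>A minimal per-file sketch of the copy-and-rename approach described
above (file and directory names are hypothetical; in practice
<literal>lfs_migrate</literal> automates these steps and is generally
preferred):</para>
<screen>client# lfs setstripe -c -1 /mnt/testfs/dir1/file.tmp
client# cp /mnt/testfs/dir1/file /mnt/testfs/dir1/file.tmp
client# mv /mnt/testfs/dir1/file.tmp /mnt/testfs/dir1/file</screen>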
542 <section xml:id="performing_directio">
545 <primary>I/O</primary>
546 <secondary>direct</secondary>
547 </indexterm>Performing Direct I/O</title>
<para>The Lustre software supports the
<literal>O_DIRECT</literal> flag to the <literal>open()</literal> system
call.</para>
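<para>For example, direct I/O can be exercised from the command line with the
<literal>oflag=direct</literal> option of <literal>dd</literal> (the target
file name is hypothetical):</para>
<screen>client# dd if=/dev/zero of=/mnt/testfs/dio_file bs=1M count=100 oflag=direct</screen>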
550 <para>Applications using the
551 <literal>read()</literal> and
552 <literal>write()</literal> calls must supply buffers aligned on a page
boundary (usually 4 KB). If the alignment is not correct, the call returns
554 <literal>-EINVAL</literal>. Direct I/O may help performance in cases where
555 the client is doing a large amount of I/O and is CPU-bound (CPU utilization
558 <title>Making File System Objects Immutable</title>
<para>An immutable file or directory is one that cannot be modified,
renamed, or removed. To set this flag on a file, run:</para>
563 <replaceable>file</replaceable>
565 <para>To remove this flag, use
566 <literal>chattr -i</literal></para>
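<para>For example (the file name is hypothetical):</para>
<screen>client# chattr +i /mnt/testfs/important_file
client# chattr -i /mnt/testfs/important_file</screen>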
569 <section xml:id="other_io_options">
570 <title>Other I/O Options</title>
571 <para>This section describes other I/O options, including checksums, and
572 the ptlrpcd thread pool.</para>
574 <title>Lustre Checksums</title>
575 <para>To guard against network data corruption, a Lustre client can
576 perform two types of data checksums: in-memory (for data in client
577 memory) and wire (for data sent over the network). For each checksum
578 type, a 32-bit checksum of the data read or written on both the client
579 and server is computed, to ensure that the data has not been corrupted in
580 transit over the network. The
581 <literal>ldiskfs</literal> backing file system does NOT do any persistent
582 checksumming, so it does not detect corruption of data in the OST file
<para>The checksumming feature is enabled by default on individual
585 client nodes. If the client or OST detects a checksum mismatch, then an
586 error is logged in the syslog of the form:</para>
588 LustreError: BAD WRITE CHECKSUM: changed in transit before arrival at OST: \
589 from 192.168.1.1@tcp inum 8991479/2386814769 object 1127239/0 extent [10240\
592 <para>If this happens, the client will re-read or re-write the affected
593 data up to five times to get a good copy of the data over the network. If
594 it is still not possible, then an I/O error is returned to the
596 <para>To enable both types of checksums (in-memory and wire), run:</para>
598 lctl set_param llite.*.checksum_pages=1
600 <para>To disable both types of checksums (in-memory and wire),
603 lctl set_param llite.*.checksum_pages=0
605 <para>To check the status of a wire checksum, run:</para>
607 lctl get_param osc.*.checksums
610 <title>Changing Checksum Algorithms</title>
611 <para>By default, the Lustre software uses the adler32 checksum
612 algorithm, because it is robust and has a lower impact on performance
613 than crc32. The Lustre file system administrator can change the
checksum algorithm via
<literal>lctl set_param</literal>, depending on what is supported in
617 <para>To check which checksum algorithm is being used by the Lustre
618 software, run:</para>
620 $ lctl get_param osc.*.checksum_type
622 <para>To change the wire checksum algorithm, run:</para>
624 $ lctl set_param osc.*.checksum_type=
625 <replaceable>algorithm</replaceable>
628 <para>The in-memory checksum always uses the adler32 algorithm, if
629 available, and only falls back to crc32 if adler32 cannot be
632 <para>In the following example, the
633 <literal>lctl get_param</literal> command is used to determine that the
634 Lustre software is using the adler32 checksum algorithm. Then the
635 <literal>lctl set_param</literal> command is used to change the checksum
636 algorithm to crc32. A second
637 <literal>lctl get_param</literal> command confirms that the crc32
638 checksum algorithm is now in use.</para>
640 $ lctl get_param osc.*.checksum_type
641 osc.testfs-OST0000-osc-ffff81012b2c48e0.checksum_type=crc32 [adler]
642 $ lctl set_param osc.*.checksum_type=crc32
643 osc.testfs-OST0000-osc-ffff81012b2c48e0.checksum_type=crc32
644 $ lctl get_param osc.*.checksum_type
645 osc.testfs-OST0000-osc-ffff81012b2c48e0.checksum_type=[crc32] adler
650 <title>PtlRPC Client Thread Pool</title>
651 <para>The use of large SMP nodes for Lustre clients
652 requires significant parallelism within the kernel to avoid
653 cases where a single CPU would be 100% utilized and other CPUs would be
relatively idle. This is especially noticeable when a single thread
655 traverses a large directory.</para>
656 <para>The Lustre client implements a PtlRPC daemon thread pool, so that
657 multiple threads can be created to serve asynchronous RPC requests, even
658 if only a single userspace thread is running. The number of ptlrpcd
659 threads spawned is controlled at module load time using module options.
660 By default two service threads are spawned per CPU socket.</para>
661 <para>One of the issues with thread operations is the cost of moving a
662 thread context from one CPU to another with the resulting loss of CPU
663 cache warmth. To reduce this cost, PtlRPC threads can be bound to a CPU.
664 However, if the CPUs are busy, a bound thread may not be able to respond
665 quickly, as the bound CPU may be busy with other tasks and the thread
666 must wait to schedule.</para>
667 <para>Because of these considerations, the pool of ptlrpcd threads can be
668 a mixture of bound and unbound threads. The system operator can balance
669 the thread mixture based on system size and workload.</para>
671 <title>ptlrpcd parameters</title>
672 <para>These parameters should be set in
673 <literal>/etc/modprobe.conf</literal> or in the
<literal>/etc/modprobe.d</literal> directory, as options for the ptlrpc
677 options ptlrpcd ptlrpcd_per_cpt_max=XXX
679 <para>Sets the number of ptlrpcd threads created per socket.
680 The default if not specified is two threads per CPU socket, including
681 hyper-threaded CPUs. The lower bound is 2 threads per socket.
683 options ptlrpcd ptlrpcd_bind_policy=[1-4]
685 <para>Controls the binding of threads to CPUs. There are four policy
690 <literal role="bold">
PDB_POLICY_NONE</literal> (ptlrpcd_bind_policy=1) All threads are
696 <literal role="bold">
PDB_POLICY_FULL</literal> (ptlrpcd_bind_policy=2) All threads
698 attempt to bind to a CPU.</para>
702 <literal role="bold">
PDB_POLICY_PAIR</literal> (ptlrpcd_bind_policy=3) This is the
704 default policy. Threads are allocated as a bound/unbound pair. Each
705 thread (bound or free) has a partner thread. The partnering is used
706 by the ptlrpcd load policy, which determines how threads are
707 allocated to CPUs.</para>
711 <literal role="bold">
PDB_POLICY_NEIGHBOR</literal> (ptlrpcd_bind_policy=4) Threads are
713 allocated as a bound/unbound pair. Each thread (bound or free) has
714 two partner threads.</para>
722 vim:expandtab:shiftwidth=2:tabstop=8: