<?xml version="1.0" encoding="utf-8"?>
<glossary xmlns="http://docbook.org/ns/docbook"
  xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en-US">
  <title>Glossary</title>
  <glossentry xml:id="acl">
    <glossterm>ACL</glossterm>
    <glossdef>
      <para>Access control list. An extended attribute associated with a
      file that contains enhanced authorization directives.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="ostfail">
    <glossterm>Administrative OST failure</glossterm>
    <glossdef>
      <para>A manual configuration change to mark an OST as unavailable, so
      that operations intended for that OST fail immediately with an I/O
      error instead of waiting indefinitely for OST recovery to
      complete.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="completioncallback">
    <glossterm>Completion callback</glossterm>
    <glossdef>
      <para>An RPC made by the lock server on an OST or MDT to another
      system, usually a client, to indicate that the lock is now
      granted.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="changelog">
    <glossterm>configlog</glossterm>
    <glossdef>
      <para>An llog file, kept on a node or retrieved over the network from
      the management server, that contains the configuration instructions
      for the Lustre file system at startup time.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="configlock">
    <glossterm>Configuration lock</glossterm>
    <glossdef>
      <para>A lock held by every node in the cluster to control
      configuration changes. When the configuration is changed on the MGS,
      it revokes this lock from all nodes. When the nodes receive the
      blocking callback, they quiesce their traffic, cancel and re-enqueue
      the lock, and wait until it is granted again. They can then fetch the
      configuration updates and resume normal operation.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="defaultstripepattern">
    <glossterm>Default stripe pattern</glossterm>
    <glossdef>
      <para>Information in the LOV descriptor that describes the default
      stripe count, stripe size, and layout pattern used for new files in a
      file system. This can be amended by using a directory stripe
      descriptor or a per-file stripe descriptor.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="directio">
    <glossterm>Direct I/O</glossterm>
    <glossdef>
      <para>A mechanism that can be used during read and write system calls
      to avoid memory cache overhead for large I/O requests. It bypasses the
      data copy between application and kernel memory, and avoids buffering
      the data in the client memory.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="dirstripdesc">
    <glossterm>Directory stripe descriptor</glossterm>
    <glossdef>
      <para>An extended attribute that describes the default stripe pattern
      for new files created within that directory. This is also inherited by
      new subdirectories at the time they are created.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="DNE">
    <glossterm>Distributed namespace (DNE)</glossterm>
    <glossdef>
      <para>A collection of metadata targets serving a single file system
      namespace. Without DNE, Lustre file systems are limited to a single
      metadata target for the entire namespace. Without the ability to
      distribute metadata load over multiple targets, Lustre file system
      performance may be limited. The DNE functionality has two types of
      scalability. <emphasis>Remote Directories</emphasis> (DNE1) allows
      sub-directories to be serviced by independent MDTs, increasing
      aggregate metadata capacity and performance for independent sub-trees
      of the filesystem. This also isolates the performance of workloads
      running in a specific sub-directory on one MDT from workloads on
      other MDTs. In Lustre 2.8 and later, <emphasis>Striped
      Directories</emphasis> (DNE2) allow a single directory to be serviced
      by multiple MDTs.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="ea">
    <glossterm>EA</glossterm>
    <glossdef>
      <para>Extended attribute. A small amount of data that can be retrieved
      through a name (EA or attr) associated with a particular inode. A
      Lustre file system uses EAs to store striping information (indicating
      the location of file data on OSTs). Examples of extended attributes
      are ACLs, striping information, and the FID of the file.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="eviction">
    <glossterm>Eviction</glossterm>
    <glossdef>
      <para>The process of removing a client's state from the server if the
      client is unresponsive to server requests after a timeout or if server
      recovery fails. If a client is still running, it is required to flush
      the cache associated with the server when it becomes aware that it has
      been evicted.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="export">
    <glossterm>Export</glossterm>
    <glossdef>
      <para>The state held by a server for a client that is sufficient to
      transparently recover all in-flight operations when a single failure
      occurs.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="extent">
    <glossterm>Extent</glossterm>
    <glossdef>
      <para>A range of contiguous bytes or blocks in a file that is
      addressed by a {start, length} tuple instead of individual block
      addresses.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="extendloc">
    <glossterm>Extent lock</glossterm>
    <glossdef>
      <para>An LDLM lock used by the OSC to protect an extent in a storage
      object, controlling concurrent read/write, file size acquisition, and
      truncation operations.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="failback">
    <glossterm>Failback</glossterm>
    <glossdef>
      <para>The failover process in which the default active server regains
      control from the backup server that had taken control of the
      resource.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="failoutost">
    <glossterm>Failout OST</glossterm>
    <glossdef>
      <para>An OST that is not expected to recover if it fails to answer
      client requests. A failout OST can be administratively failed, thereby
      enabling clients to return errors when accessing data on the failed
      OST without making additional network requests or waiting for OST
      recovery to complete.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="failover">
    <glossterm>Failover</glossterm>
    <glossdef>
      <para>The process by which a standby computer server system takes over
      for an active computer server after a failure of the active node.
      Typically, the standby computer server gains exclusive access to a
      shared storage device between the two servers.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="fid">
    <glossterm>FID</glossterm>
    <glossdef>
      <para>Lustre File Identifier. A 128-bit identifier, unique within a
      file system, for a file or object in the file system. The FID
      structure contains a unique 64-bit sequence number (see
      <emphasis role="italic">FLDB</emphasis>), a 32-bit object ID (OID),
      and a 32-bit version number. The sequence number is unique across all
      Lustre targets (OSTs and MDTs).</para>
    </glossdef>
  </glossentry>
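  <para>The 128-bit layout described above (a 64-bit sequence number, a
  32-bit OID, and a 32-bit version) can be sketched as follows. This is an
  illustrative packing only; the field order and byte order are assumptions
  for the sketch, not the actual Lustre on-disk or on-wire encoding.</para>

```python
import struct

# Illustrative only: pack a FID as a 64-bit sequence number, a 32-bit
# object ID (OID), and a 32-bit version -- 128 bits in total. The
# little-endian byte order and field order are assumptions, not the
# real Lustre encoding.
def pack_fid(seq, oid, ver):
    return struct.pack("<QII", seq, oid, ver)

def unpack_fid(blob):
    return struct.unpack("<QII", blob)

fid = pack_fid(0x200000401, 1, 0)
assert len(fid) == 16                       # 128 bits
assert unpack_fid(fid) == (0x200000401, 1, 0)
```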
  <glossentry xml:id="fileset">
    <glossterm>Fileset</glossterm>
    <glossdef>
      <para>A group of files that are defined through a directory that
      represents the start point of a file system.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="fldb">
    <glossterm>FLDB</glossterm>
    <glossdef>
      <para>FID location database. This database maps a sequence of FIDs to
      a specific target (MDT or OST), which manages the objects within the
      sequence. The FLDB is cached by all clients and servers in the file
      system, but is typically only modified when new servers are added to
      the file system.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="flightgroup">
    <glossterm>Flight group</glossterm>
    <glossdef>
      <para>Group of I/O RPCs initiated by the OSC that are concurrently
      queued or processed at the OST. Increasing the number of RPCs in
      flight for high-latency networks can increase throughput and reduce
      visible latency at the client.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="glimpsecallback">
    <glossterm>Glimpse callback</glossterm>
    <glossdef>
      <para>An RPC made by an OST or MDT to another system (usually a
      client) to indicate that a held extent lock should be surrendered. If
      the system is using the lock, then the system should return the object
      size and timestamps in the reply to the glimpse callback instead of
      cancelling the lock. Glimpses are introduced to optimize the
      acquisition of file attributes without introducing contention on an
      active lock.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="import">
    <glossterm>Import</glossterm>
    <glossdef>
      <para>The state held by the client for each target that it is
      connected to. It holds server NIDs, connection state, and uncommitted
      RPCs needed to fully recover a transaction sequence after a server
      failure and restart.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="intentlock">
    <glossterm>Intent lock</glossterm>
    <glossdef>
      <para>A special Lustre file system locking operation in the Linux
      kernel. An intent lock combines a request for a lock with the full
      information to perform the operation(s) for which the lock was
      requested. This offers the server the option of granting the lock or
      performing the operation and informing the client of the operation
      result without granting a lock. The use of intent locks enables
      metadata operations (even complicated ones) to be implemented with a
      single RPC from the client to the server.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="lbug">
    <glossterm>LBUG</glossterm>
    <glossdef>
      <para>A fatal error condition detected by the software that halts
      execution of the kernel thread to avoid potential further corruption
      of the system state. It is printed to the console log and triggers a
      dump of the internal debug log. The system must be rebooted to clear
      this state.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="ldlm">
    <glossterm>LDLM</glossterm>
    <glossdef>
      <para>Lustre Distributed Lock Manager.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="lfs">
    <glossterm>lfs</glossterm>
    <glossdef>
      <para>The Lustre file system command-line utility that allows end
      users to interact with Lustre software features, such as setting or
      checking file striping or per-target free space. For more details, see
      <xref xmlns:xlink="http://www.w3.org/1999/xlink"
      linkend="dbdoclet.50438206_94597" />.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="lfsck">
    <glossterm>LFSCK</glossterm>
    <glossdef>
      <para>Lustre file system check. A distributed version of a disk file
      system checker. Normally, <literal>lfsck</literal> does not need to be
      run, except when file systems are damaged by events such as multiple
      disk failures and cannot be recovered using file system journal
      recovery.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="llite">
    <glossterm>llite</glossterm>
    <glossdef>
      <para>Lustre lite. This term is used in code and module names for code
      that is related to the Linux client VFS interface.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="llog">
    <glossterm>llog</glossterm>
    <glossdef>
      <para>Lustre log. An efficient log data structure used internally by
      the file system for storing configuration and distributed transaction
      records. An <literal>llog</literal> is suitable for rapid
      transactional appends of records and cheap cancellation of records
      through a bitmap.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="llogcatalog">
    <glossterm>llog catalog</glossterm>
    <glossdef>
      <para>Lustre log catalog. An <literal>llog</literal> with records that
      each point at an <literal>llog</literal>. Catalogs were introduced to
      give <literal>llogs</literal> increased scalability.
      <literal>llogs</literal> have an originator, which writes records, and
      a replicator, which cancels records when they are no longer
      needed.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="lmv">
    <glossterm>LMV</glossterm>
    <glossdef>
      <para>Logical metadata volume. A module that implements a DNE
      client-side abstraction device. It allows a client to work with many
      MDTs without changes to the llite module. The LMV code forwards
      requests to the correct MDT based on name or directory striping
      information and merges replies into a single result to pass back to
      the <literal>llite</literal> layer that connects the Lustre file
      system with Linux VFS, supports VFS semantics, and complies with POSIX
      interface specifications.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="lnd">
    <glossterm>LND</glossterm>
    <glossdef>
      <para>Lustre network driver. A code module that enables LNet support
      over particular transports, such as TCP and various kinds of
      InfiniBand networks.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="lnet">
    <glossterm>LNet</glossterm>
    <glossdef>
      <para>Lustre networking. A message-passing network protocol capable of
      running and routing through various physical layers. LNet forms the
      underpinning of LNETrpc.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="lockclient">
    <glossterm>Lock client</glossterm>
    <glossdef>
      <para>A module that makes lock RPCs to a lock server and handles
      revocations from the server.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="lockserver">
    <glossterm>Lock server</glossterm>
    <glossdef>
      <para>A service, co-located with a storage target, that manages locks
      on certain objects. It also issues lock callback requests while
      servicing locks and, for objects that are already locked, completes
      lock requests.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="lov">
    <glossterm>LOV</glossterm>
    <glossdef>
      <para>Logical object volume. The object storage analog of a logical
      volume in a block device volume management system, such as LVM or
      EVMS. The LOV is primarily used to present a collection of OSTs as a
      single device to the MDT and client file system drivers.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="lovdes">
    <glossterm>LOV descriptor</glossterm>
    <glossdef>
      <para>A set of configuration directives that describes which nodes are
      OSS systems in the Lustre cluster and provides names for their
      OSTs.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="lustreclient">
    <glossterm>Lustre client</glossterm>
    <glossdef>
      <para>An operating instance with a mounted Lustre file system.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="lustrefile">
    <glossterm>Lustre file</glossterm>
    <glossdef>
      <para>A file in the Lustre file system. A Lustre file is implemented
      through an inode on a metadata server that contains references to
      storage objects on OSSs.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="mballoc">
    <glossterm>mballoc</glossterm>
    <glossdef>
      <para>Multi-block allocate. Functionality in ext4 that enables the
      <literal>ldiskfs</literal> file system to allocate multiple blocks
      with a single request to the block allocator.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="mdc">
    <glossterm>MDC</glossterm>
    <glossdef>
      <para>Metadata client. A Lustre client component that sends metadata
      requests via RPC over LNet to the metadata target (MDT).</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="mdd">
    <glossterm>MDD</glossterm>
    <glossdef>
      <para>Metadata disk device. A Lustre server component that interfaces
      with the underlying object storage device to manage the Lustre file
      system namespace (directories, file ownership, attributes).</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="mds">
    <glossterm>MDS</glossterm>
    <glossdef>
      <para>Metadata server. The server node that is hosting the metadata
      target (MDT).</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="mdt">
    <glossterm>MDT</glossterm>
    <glossdef>
      <para>Metadata target. A storage device containing the file system
      namespace that is made available over the network to a client. It
      stores filenames, attributes, and the layout of OST objects that store
      the file data.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="mdt0">
    <glossterm>MDT0000</glossterm>
    <glossdef>
      <para>The metadata target storing the file system root directory, as
      well as some core services such as quota tables. Multiple metadata
      targets are possible in the same file system. MDT0000 must be
      available for the file system to be accessible.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="mgs">
    <glossterm>MGS</glossterm>
    <glossdef>
      <para>Management service. A software module that manages the startup
      configuration and changes to the configuration. Also, the server node
      on which this system runs.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="mountconf">
    <glossterm>mountconf</glossterm>
    <glossdef>
      <para>The Lustre configuration protocol that formats disk file systems
      on the server with the <literal>mkfs.lustre</literal> program and
      prepares them for automatic incorporation into a Lustre cluster. This
      allows clients to be configured and mounted with a simple
      <literal>mount</literal> command.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="nid">
    <glossterm>NID</glossterm>
    <glossdef>
      <para>Network identifier. Encodes the type, network number, and
      network address of a network interface on a node for use by the Lustre
      file system.</para>
    </glossdef>
  </glossentry>
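  <para>A NID in its textual form is written as
  <literal>address@network</literal>, for example
  <literal>192.168.0.1@tcp0</literal> or <literal>10.2.0.1@o2ib</literal>.
  A minimal, illustrative parser is sketched below; real LNet parsing
  handles more forms (network numbers, wildcards) than this sketch
  does.</para>

```python
# Minimal, illustrative parser for the textual NID form "address@network",
# e.g. "192.168.0.1@tcp0" or "10.2.0.1@o2ib". This is a sketch only; the
# real LNet parser accepts additional forms not handled here.
def parse_nid(nid):
    address, _, network = nid.partition("@")
    if not address or not network:
        raise ValueError(f"not a NID: {nid!r}")
    return address, network

assert parse_nid("192.168.0.1@tcp0") == ("192.168.0.1", "tcp0")
assert parse_nid("10.2.0.1@o2ib") == ("10.2.0.1", "o2ib")
```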
  <glossentry xml:id="nioapi">
    <glossterm>NIO API</glossterm>
    <glossdef>
      <para>A subset of the LNet RPC module that implements a library for
      sending large network requests, moving buffers with RDMA.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="nodeaffdef">
    <glossterm>Node affinity</glossterm>
    <glossdef>
      <para>Node affinity describes the property of a multi-threaded
      application to behave sensibly on multiple cores. Without node
      affinity, an operating system scheduler may move application threads
      across processors in a sub-optimal way that significantly reduces
      performance of the application overall.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="nrs">
    <glossterm>NRS</glossterm>
    <glossdef>
      <para>Network request scheduler. A subcomponent of the PTLRPC layer
      that specifies the order in which RPCs are handled at servers. This
      allows optimizing large numbers of incoming requests for disk access
      patterns, fairness between clients, and other administrator-selected
      policies.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="NUMAdef">
    <glossterm>NUMA</glossterm>
    <glossdef>
      <para>Non-uniform memory access. Describes a multi-processing
      architecture where the time taken to access given memory differs
      depending on memory location relative to a given processor. Typically,
      machines with multiple sockets are NUMA architectures.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="odb">
    <glossterm>OBD</glossterm>
    <glossdef>
      <para>Object-based device. The generic term for components in the
      Lustre software stack that can be configured on the client or server.
      Examples include MDC, OSC, LOV, MDT, and OST.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="odbtype">
    <glossterm>OBD type</glossterm>
    <glossdef>
      <para>A module that can implement the Lustre object or metadata APIs.
      Examples of OBD types include the LOV, OSC, and OSD.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="objectstorage">
    <glossterm>Object storage</glossterm>
    <glossdef>
      <para>Refers to a storage-device API or protocol involving storage
      objects. The two best-known instances of object storage are the T10
      iSCSI storage object protocol and the Lustre object storage protocol
      (a network implementation of the Lustre object API). The principal
      difference between the Lustre protocol and the T10 protocol is that
      the Lustre protocol includes locking and recovery control in the
      protocol and is not tied to a SCSI transport layer.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="opencache">
    <glossterm>opencache</glossterm>
    <glossdef>
      <para>A cache of open file handles. This is a performance
      enhancement.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="orphanobjects">
    <glossterm>Orphan objects</glossterm>
    <glossdef>
      <para>Storage objects to which no Lustre file points. Orphan objects
      can arise from crashes and are automatically removed by an
      <literal>llog</literal> recovery between the MDT and OST. When a
      client deletes a file, the MDT unlinks it from the namespace. If this
      is the last link, it will atomically add the OST objects into a
      per-OST <literal>llog</literal> (if a crash has occurred) and then
      wait until the unlink commits to disk. (At this point, it is safe to
      destroy the OST objects. Once the destroy is committed, the MDT
      <literal>llog</literal> records can be cancelled.)</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="osc">
    <glossterm>OSC</glossterm>
    <glossdef>
      <para>Object storage client. The client module communicating with an
      OST (via an OSS).</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="osd">
    <glossterm>OSD</glossterm>
    <glossdef>
      <para>Object storage device. A generic, industry term for storage
      devices with a more extended interface than block-oriented devices
      such as disks. For the Lustre file system, this name is used to
      describe a software module that implements an object storage API in
      the kernel. It is also used to refer to an instance of an object
      storage device created by that driver. The OSD device is layered on a
      file system, with methods that mimic create, destroy, and I/O
      operations on file objects.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="oss">
    <glossterm>OSS</glossterm>
    <glossdef>
      <para>Object storage server. A server OBD that provides access to
      local OSTs.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="ost">
    <glossterm>OST</glossterm>
    <glossdef>
      <para>Object storage target. An OSD made accessible through a network
      protocol. Typically, an OST is associated with a unique OSD which, in
      turn, is associated with a formatted disk file system on the server
      containing the data objects.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="pdirops">
    <glossterm>pdirops</glossterm>
    <glossdef>
      <para>A locking protocol in the Linux VFS layer that allows directory
      operations to be performed in parallel.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="pool">
    <glossterm>Pool</glossterm>
    <glossdef>
      <para>OST pools allow the administrator to associate a name with an
      arbitrary subset of OSTs in a Lustre cluster. A group of OSTs can be
      combined into a named pool with unique access permissions and stripe
      characteristics.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="portal">
    <glossterm>Portal</glossterm>
    <glossdef>
      <para>A service address on an LNet NID that binds requests to a
      specific request service, such as an MDS, MGS, OSS, or LDLM. Services
      may listen on multiple portals to ensure that high-priority messages
      are not queued behind many slow requests on another portal.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="ptlrpc">
    <glossterm>PTLRPC</glossterm>
    <glossdef>
      <para>An RPC protocol layered on LNet. This protocol deals with
      stateful servers and has exactly-once semantics and built-in support
      for recovery.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="recovery">
    <glossterm>Recovery</glossterm>
    <glossdef>
      <para>The process that re-establishes the connection state when a
      client that was previously connected to a server reconnects after the
      server restarts.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="remotedirectories">
    <glossterm>Remote directory</glossterm>
    <glossdef>
      <para>A remote directory describes a feature of Lustre where metadata
      for files in a given directory may be stored on a different MDT than
      the metadata for the parent directory. This is sometimes referred to
      as DNE1.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="replay">
    <glossterm>Replay request</glossterm>
    <glossdef>
      <para>The concept of re-executing a server request after the server
      has lost information in its memory caches and shut down. The replay
      requests are retained by clients until the server(s) have confirmed
      that the data is persistent on disk. Only requests for which a client
      received a reply and were assigned a transaction number by the server
      are replayed. Requests that did not get a reply are resent.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="resent">
    <glossterm>Resent request</glossterm>
    <glossdef>
      <para>An RPC request sent from a client to a server that has not had a
      reply from the server. This might happen if the request was lost on
      the way to the server, if the reply was lost on the way back from the
      server, or if the server crashes before or after processing the
      request. During server RPC recovery processing, resent requests are
      processed after replayed requests, and use the client RPC XID to
      determine whether the resent RPC request was already executed on the
      server.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="revocation">
    <glossterm>Revocation callback</glossterm>
    <glossdef>
      <para>Also called a "blocking callback". An RPC request made by the
      lock server (typically for an OST or MDT) to a lock client to revoke a
      granted DLM lock.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="rootsquash">
    <glossterm>Root squash</glossterm>
    <glossdef>
      <para>A mechanism whereby the identity of a root user on a client
      system is mapped to a different identity on the server to prevent root
      users on clients from accessing or modifying root-owned files on the
      servers. This does not prevent root users on the client from assuming
      the identity of a non-root user, so it should not be considered a
      robust security mechanism. Typically, for management purposes, at
      least one client system should not be subject to root squash.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="routing">
    <glossterm>Routing</glossterm>
    <glossdef>
      <para>LNet routing between different networks and LNDs.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="rpc">
    <glossterm>RPC</glossterm>
    <glossdef>
      <para>Remote procedure call. A network encoding of a request.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="stripe">
    <glossterm>Stripe</glossterm>
    <glossdef>
      <para>A contiguous, logical extent of a Lustre file written to a
      single OST. Used synonymously with a single OST data object that makes
      up part of a file visible to user applications.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="stripeddirectory" condition="l28">
    <glossterm>Striped Directory</glossterm>
    <glossdef>
      <para>A striped directory is one in which the metadata for files in a
      given directory is distributed evenly over multiple MDTs. Striped
      directories are only available in Lustre software version 2.8 or
      later. A user can create a striped directory to increase the metadata
      performance of large directories by distributing the metadata requests
      for a single directory over two or more MDTs.</para>
    </glossdef>
  </glossentry>
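  <para>The distribution described above can be sketched as a hash of the
  filename selecting one of the directory's MDTs. This is illustrative
  only: the <literal>crc32</literal> hash used here is a stand-in, not one
  of the hash functions Lustre actually uses for striped
  directories.</para>

```python
import zlib

# Illustrative only: a striped directory spreads name entries across its
# MDTs by hashing the filename. crc32 is a stand-in for Lustre's real
# directory hash functions.
def mdt_index(filename, mdt_count):
    return zlib.crc32(filename.encode()) % mdt_count

# With 4 MDTs, 1000 filenames spread over all of them.
counts = [0, 0, 0, 0]
for i in range(1000):
    counts[mdt_index(f"file.{i}", 4)] += 1
assert sum(counts) == 1000
assert all(c > 0 for c in counts)
```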
  <glossentry xml:id="stridesize">
    <glossterm>Stripe size</glossterm>
    <glossdef>
      <para>The maximum number of bytes that will be written to an OST
      object before the next object in a file's layout is used when writing
      sequential data to a file. Once a full stripe has been written to each
      of the objects in the layout, the first object will be written to
      again in round-robin fashion.</para>
    </glossdef>
  </glossentry>
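  <para>The round-robin layout described above can be sketched as a mapping
  from a file offset to an object index and an offset within that object.
  This is a sketch of the RAID-0 arithmetic only, not Lustre's actual
  layout code.</para>

```python
# Sketch of the round-robin RAID-0 layout: map a file offset to
# (object index within the layout, offset within that object),
# given the stripe size and stripe count.
def layout_offset(offset, stripe_size, stripe_count):
    stripe_number = offset // stripe_size       # which stripe overall
    obj_index = stripe_number % stripe_count    # round-robin object choice
    obj_offset = (stripe_number // stripe_count) * stripe_size \
                 + offset % stripe_size
    return obj_index, obj_offset

MiB = 1 << 20
# With a 1 MiB stripe size over 4 objects, byte 5 MiB lands back on
# object 1 at offset 1 MiB.
assert layout_offset(0, MiB, 4) == (0, 0)
assert layout_offset(5 * MiB, MiB, 4) == (1, MiB)
```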
  <glossentry xml:id="stripcount">
    <glossterm>Stripe count</glossterm>
    <glossdef>
      <para>The number of OSTs holding objects for a RAID0-striped Lustre
      file.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="t10">
    <glossterm>T10 object protocol</glossterm>
    <glossdef>
      <para>An object storage protocol tied to the SCSI transport layer. The
      Lustre file system does not use T10.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="widestriping">
    <glossterm>Wide striping</glossterm>
    <glossdef>
      <para>Strategy of using many OSTs to store stripes of a single file.
      This obtains maximum bandwidth access to a single file through
      parallel utilization of many OSTs. For more information about wide
      striping, see
      <xref xmlns:xlink="http://www.w3.org/1999/xlink"
      linkend="wide_striping" />.</para>
    </glossdef>
  </glossentry>
</glossary>