<?xml version="1.0" encoding="UTF-8"?>
<glossary xmlns="http://docbook.org/ns/docbook" xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en-US">
  <title>Glossary</title>
  <glossentry xml:id="acl">
    <glossterm>ACL</glossterm>
    <glossdef>
      <para>Access control list. An extended attribute associated with a file that contains
        enhanced authorization directives.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="ostfail">
    <glossterm>Administrative OST failure</glossterm>
    <glossdef>
      <para>A manual configuration change to mark an OST as unavailable, so that operations
        intended for that OST fail immediately with an I/O error instead of waiting
        indefinitely for OST recovery to complete.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="completioncallback">
    <glossterm>Completion callback</glossterm>
    <glossdef>
      <para>An RPC made by the lock server on an OST or MDT to another system, usually a
        client, to indicate that the lock is now granted.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="changelog">
    <glossterm>configlog</glossterm>
    <glossdef>
      <para>An llog file, stored on a node or retrieved from a management server over the
        network, that contains configuration instructions for the Lustre file system at
        startup time.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="configlock">
    <glossterm>Configuration lock</glossterm>
    <glossdef>
      <para>A lock held by every node in the cluster to control configuration changes. When
        the configuration is changed on the MGS, it revokes this lock from all nodes. When
        the nodes receive the blocking callback, they quiesce their traffic, cancel and
        re-enqueue the lock, and wait until it is granted again. They can then fetch the
        configuration updates and resume normal operation.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="defaultstrippattern">
    <glossterm>Default stripe pattern</glossterm>
    <glossdef>
      <para>Information in the LOV descriptor that describes the default stripe count,
        stripe size, and layout pattern used for new files in a file system. This can be
        amended by using a directory stripe descriptor or a per-file stripe
        descriptor.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="directio">
    <glossterm>Direct I/O</glossterm>
    <glossdef>
      <para>A mechanism that can be used during read and write system calls to avoid memory
        cache overhead for large I/O requests. It bypasses the data copy between application
        and kernel memory, and avoids buffering the data in the client memory.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="dirstripdesc">
    <glossterm>Directory stripe descriptor</glossterm>
    <glossdef>
      <para>An extended attribute that describes the default stripe pattern for new files
        created within that directory. It is also inherited by new subdirectories at the
        time they are created.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="DNE">
    <glossterm>Distributed namespace (DNE)</glossterm>
    <glossdef>
      <para>A collection of metadata targets implementing a single file system
        namespace.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="ea">
    <glossterm>EA</glossterm>
    <glossdef>
      <para>Extended attribute. A small amount of data that can be retrieved through a name
        (EA or attr) associated with a particular inode. A Lustre file system uses EAs to
        store striping information (indicating the location of file data on OSTs). Examples
        of extended attributes are ACLs, striping information, and the FID of the
        file.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="eviction">
    <glossterm>Eviction</glossterm>
    <glossdef>
      <para>The process of removing a client's state from the server if the client is
        unresponsive to server requests after a timeout or if server recovery fails. If a
        client is still running, it is required to flush the cache associated with the
        server when it becomes aware that it has been evicted.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="export">
    <glossterm>Export</glossterm>
    <glossdef>
      <para>The state held by a server for a client that is sufficient to transparently
        recover all in-flight operations when a single failure occurs.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="extent">
    <glossterm>Extent</glossterm>
    <glossdef>
      <para>A range of contiguous bytes or blocks in a file that are addressed by a {start,
        length} tuple instead of individual block numbers.</para>
    </glossdef>
  </glossentry>
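Because an extent is addressed by a {start, length} tuple, range arithmetic such as
overlap testing is straightforward. A minimal illustrative sketch (the function name and
tuple encoding are assumptions for illustration, not Lustre code):

```python
def extents_overlap(a, b):
    """True if extents a=(start, length) and b=(start, length) intersect."""
    a_start, a_len = a
    b_start, b_len = b
    # Two half-open ranges [start, start+length) intersect when each
    # starts before the other one ends.
    return a_start < b_start + b_len and b_start < a_start + a_len

print(extents_overlap((0, 4096), (2048, 4096)))  # True: ranges share bytes
print(extents_overlap((0, 4096), (4096, 4096)))  # False: merely adjacent
```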
  <glossentry xml:id="extendloc">
    <glossterm>Extent lock</glossterm>
    <glossdef>
      <para>An LDLM lock used by the OSC to protect an extent in a storage object for
        concurrent control of read/write, file size acquisition, and truncation
        operations.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="failback">
    <glossterm>Failback</glossterm>
    <glossdef>
      <para>The failover process in which the default active server regains control from the
        backup server that had taken control of the service.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="failoutost">
    <glossterm>Failout OST</glossterm>
    <glossdef>
      <para>An OST that is not expected to recover if it fails to answer client requests. A
        failout OST can be administratively failed, thereby enabling clients to return
        errors when accessing data on the failed OST without making additional network
        requests or waiting for OST recovery to complete.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="failover">
    <glossterm>Failover</glossterm>
    <glossdef>
      <para>The process by which a standby computer server system takes over for an active
        computer server after a failure of the active node. Typically, the standby computer
        server gains exclusive access to a shared storage device between the two
        servers.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="fid">
    <glossterm>FID</glossterm>
    <glossdef>
      <para>Lustre File Identifier. A 128-bit file system-unique identifier for a file or
        object in the file system. The FID structure contains a unique 64-bit sequence
        number (see <emphasis role="italic">FLDB</emphasis>), a 32-bit object ID (OID), and
        a 32-bit version number. The sequence number is unique across all Lustre targets
        (OSTs and MDTs).</para>
    </glossdef>
  </glossentry>
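The 64-bit sequence, 32-bit OID, and 32-bit version fields described above can be modeled
as a single 128-bit integer. This is a hypothetical sketch of that layout, not the actual
Lustre FID structure or API:

```python
def fid_pack(seq: int, oid: int, ver: int) -> int:
    """Pack (sequence, OID, version) into one 128-bit integer:
    bits 127..64 = sequence, 63..32 = OID, 31..0 = version."""
    assert 0 <= seq < 2**64 and 0 <= oid < 2**32 and 0 <= ver < 2**32
    return (seq << 64) | (oid << 32) | ver

def fid_unpack(fid: int):
    """Split a packed 128-bit FID back into its three fields."""
    return fid >> 64, (fid >> 32) & 0xFFFFFFFF, fid & 0xFFFFFFFF

# Round-trip a sample FID (values are illustrative only).
fid = fid_pack(0x200000401, 7, 0)
print(fid_unpack(fid))  # (8589935617, 7, 0)
```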
  <glossentry xml:id="fileset">
    <glossterm>Fileset</glossterm>
    <glossdef>
      <para>A group of files that are defined through a directory that represents the
        starting point of a file system.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="fldb">
    <glossterm>FLDB</glossterm>
    <glossdef>
      <para>FID location database. This database maps a sequence of FIDs to a specific
        target (MDT or OST), which manages the objects within the sequence. The FLDB is
        cached by all clients and servers in the file system, but is typically only modified
        when new servers are added to the file system.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="flightgroup">
    <glossterm>Flight group</glossterm>
    <glossdef>
      <para>Group of I/O RPCs initiated by the OSC that are concurrently queued or processed
        at the OST. Increasing the number of RPCs in flight for high latency networks can
        increase throughput and reduce visible latency at the client.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="glimpsecallback">
    <glossterm>Glimpse callback</glossterm>
    <glossdef>
      <para>An RPC made by an OST or MDT to another system (usually a client) to indicate
        that a held extent lock should be surrendered. If the system is using the lock, then
        the system should return the object size and timestamps in the reply to the glimpse
        callback instead of cancelling the lock. Glimpses were introduced to optimize the
        acquisition of file attributes without introducing contention on an active
        lock.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="import">
    <glossterm>Import</glossterm>
    <glossdef>
      <para>The state held by the client for each target that it is connected to. It holds
        server NIDs, connection state, and uncommitted RPCs needed to fully recover a
        transaction sequence after a server failure and restart.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="intentlock">
    <glossterm>Intent lock</glossterm>
    <glossdef>
      <para>A special Lustre file system locking operation in the Linux kernel. An intent
        lock combines a request for a lock with the full information to perform the
        operation(s) for which the lock was requested. This offers the server the option of
        granting the lock or performing the operation and informing the client of the
        operation result without granting a lock. The use of intent locks enables metadata
        operations (even complicated ones) to be implemented with a single RPC from the
        client to the server.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="lbug">
    <glossterm>LBUG</glossterm>
    <glossdef>
      <para>A fatal error condition detected by the software that halts execution of the
        kernel thread to avoid potential further corruption of the system state. It is
        printed to the console log and triggers a dump of the internal debug log. The system
        must be rebooted to clear this state.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="ldlm">
    <glossterm>LDLM</glossterm>
    <glossdef>
      <para>Lustre Distributed Lock Manager.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="lfs">
    <glossterm>lfs</glossterm>
    <glossdef>
      <para>The Lustre file system command-line utility that allows end users to interact
        with Lustre-specific features, such as setting or checking file striping or
        per-target free space. For more details, see
        <xref xmlns:xlink="http://www.w3.org/1999/xlink"
        linkend="dbdoclet.50438206_94597"/>.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="lfsck">
    <glossterm>lfsck</glossterm>
    <glossdef>
      <para>Lustre file system check. A distributed version of a disk file system checker.
        Normally, <literal>lfsck</literal> does not need to be run, except when file systems
        are damaged by events such as multiple disk failures and cannot be recovered using
        file system journal recovery.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="llite">
    <glossterm>llite</glossterm>
    <glossdef>
      <para>Lustre lite. This term is in use inside code and in module names for code that
        is related to the Linux client VFS interface.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="llog">
    <glossterm>llog</glossterm>
    <glossdef>
      <para>Lustre log. An efficient log data structure used internally by the file system
        for storing configuration and distributed transaction records. An
        <literal>llog</literal> is suitable for rapid transactional appends of records and
        cheap cancellation of records through a bitmap.</para>
    </glossdef>
  </glossentry>
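The append-plus-bitmap behavior described above can be sketched as a toy in-memory model.
The class and method names are illustrative assumptions; this is not the on-disk llog
format or its kernel API:

```python
class ToyLlog:
    """Toy model of an append-only log with bitmap-based cancellation."""

    def __init__(self):
        self.records = []   # records are only ever appended
        self.bitmap = []    # True = record live, False = cancelled

    def append(self, rec) -> int:
        """Append a record; return its index for later cancellation."""
        self.records.append(rec)
        self.bitmap.append(True)
        return len(self.records) - 1

    def cancel(self, idx: int) -> None:
        """Cancel a record cheaply by clearing its bitmap bit (O(1))."""
        self.bitmap[idx] = False

    def live(self):
        """Return the records whose bitmap bit is still set."""
        return [r for r, ok in zip(self.records, self.bitmap) if ok]

log = ToyLlog()
a = log.append("unlink object 0x1")
log.append("unlink object 0x2")
log.cancel(a)                 # e.g. after the destroy commits to disk
print(log.live())             # ['unlink object 0x2']
```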
  <glossentry xml:id="llogcatalog">
    <glossterm>llog catalog</glossterm>
    <glossdef>
      <para>Lustre log catalog. An <literal>llog</literal> with records that each point at
        an <literal>llog</literal>. Catalogs were introduced to give
        <literal>llogs</literal> increased scalability. <literal>llogs</literal> have an
        originator which writes records and a replicator which cancels records when the
        records are no longer needed.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="lmv">
    <glossterm>LMV</glossterm>
    <glossdef>
      <para>Logical metadata volume. A module that implements a DNE client-side abstraction
        device. It allows a client to work with many MDTs without changes to the llite
        module. The LMV code forwards requests to the correct MDT based on name or directory
        striping information and merges replies into a single result to pass back to the
        higher <literal>llite</literal> layer, which connects the Lustre file system with
        the Linux VFS, supports VFS semantics, and complies with POSIX interface
        specifications.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="lnd">
    <glossterm>LND</glossterm>
    <glossdef>
      <para>Lustre network driver. A code module that enables LNET support over particular
        transports, such as TCP and various kinds of InfiniBand networks.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="lnet">
    <glossterm>LNET</glossterm>
    <glossdef>
      <para>Lustre Networking. A message passing network protocol capable of running and
        routing through various physical layers. LNET forms the underpinning of
        PTLRPC.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="lockclient">
    <glossterm>Lock client</glossterm>
    <glossdef>
      <para>A module that makes lock RPCs to a lock server and handles revocations from the
        lock server.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="lockserver">
    <glossterm>Lock server</glossterm>
    <glossdef>
      <para>A service that is co-located with a storage target and manages locks on certain
        objects. While servicing lock requests, it issues lock callback requests to the
        holders of conflicting locks and, for objects that are already locked, completes the
        pending lock requests once those locks are released.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="lov">
    <glossterm>LOV</glossterm>
    <glossdef>
      <para>Logical object volume. The object storage analog of a logical volume in a block
        device volume management system, such as LVM or EVMS. The LOV is primarily used to
        present a collection of OSTs as a single device to the MDT and client file system
        drivers.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="lovdes">
    <glossterm>LOV descriptor</glossterm>
    <glossdef>
      <para>A set of configuration directives that describes which nodes are OSS systems in
        the Lustre cluster and provides names for their OSTs.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="lustreclient">
    <glossterm>Lustre client</glossterm>
    <glossdef>
      <para>An operating instance with a mounted Lustre file system.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="lustrefile">
    <glossterm>Lustre file</glossterm>
    <glossdef>
      <para>A file in the Lustre file system. The implementation of a Lustre file is through
        an inode on a metadata server that contains references to the storage objects on
        OSTs.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="mballoc">
    <glossterm>mballoc</glossterm>
    <glossdef>
      <para>Multi-block allocate. Functionality in ext4 that enables the
        <literal>ldiskfs</literal> file system to allocate multiple blocks with a single
        request to the block allocator.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="mdc">
    <glossterm>MDC</glossterm>
    <glossdef>
      <para>Metadata client. A Lustre client component that sends metadata requests via RPC
        over LNET to the metadata target (MDT).</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="mdd">
    <glossterm>MDD</glossterm>
    <glossdef>
      <para>Metadata disk device. A Lustre server component that interfaces with the
        underlying object storage device to manage the Lustre file system namespace
        (directories, file ownership, attributes).</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="mds">
    <glossterm>MDS</glossterm>
    <glossdef>
      <para>Metadata server. The server node that is hosting the metadata target
        (MDT).</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="mdt">
    <glossterm>MDT</glossterm>
    <glossdef>
      <para>Metadata target. A storage device containing the file system namespace that is
        made available over the network to a client. It stores filenames, attributes, and
        the layout of OST objects that store the file data.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="mdt0" condition='l24'>
    <glossterm>MDT0</glossterm>
    <glossdef>
      <para>The metadata target for the file system root. Since Lustre release 2.4, multiple
        metadata targets are possible in the same file system. MDT0 is the root of the file
        system, which must be available for the file system to be accessible.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="mgs">
    <glossterm>MGS</glossterm>
    <glossdef>
      <para>Management service. A software module that manages the startup configuration and
        changes to the configuration. Also, the server node on which this system
        runs.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="mountconf">
    <glossterm>mountconf</glossterm>
    <glossdef>
      <para>The Lustre configuration protocol that formats disk file systems on servers with
        the <literal>mkfs.lustre</literal> program and prepares them for automatic
        incorporation into a Lustre cluster. This allows clients to be configured and
        mounted with a simple <literal>mount</literal> command.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="nid">
    <glossterm>NID</glossterm>
    <glossdef>
      <para>Network identifier. Encodes the type, network number, and network address of a
        network interface on a node for use by the Lustre file system.</para>
    </glossdef>
  </glossentry>
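NIDs are commonly written as <literal>address@network</literal> strings, such as
<literal>192.168.0.1@tcp0</literal>, where trailing digits in the network name are the
network number. A simplified, hypothetical parser for that string form (not the LNET
implementation):

```python
def parse_nid(nid: str):
    """Split an 'address@network' NID string into
    (address, network type, network number)."""
    address, _, network = nid.partition("@")
    # Strip trailing digits: what remains is the network type name,
    # and the stripped digits (if any) are the network number.
    name = network.rstrip("0123456789")
    number = int(network[len(name):] or 0)
    return address, name, number

print(parse_nid("192.168.0.1@tcp0"))  # ('192.168.0.1', 'tcp', 0)
print(parse_nid("10.8.0.2@o2ib"))     # ('10.8.0.2', 'o2ib', 0)
```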
  <glossentry xml:id="nioapi">
    <glossterm>NIO API</glossterm>
    <glossdef>
      <para>A subset of the LNET RPC module that implements a library for sending large
        network requests, moving buffers with RDMA.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="nodeaffdef">
    <glossterm>Node affinity</glossterm>
    <glossdef>
      <para>Node affinity describes the property of a multi-threaded application to behave
        sensibly on multiple cores. Without the property of node affinity, an operating
        system scheduler may move application threads across processors in a sub-optimal way
        that significantly reduces performance of the application overall.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="nrs">
    <glossterm>NRS</glossterm>
    <glossdef>
      <para>Network request scheduler. A subcomponent of the PTLRPC layer, which specifies
        the order in which RPCs are handled at servers. This allows optimizing large numbers
        of incoming requests for disk access patterns, fairness between clients, and other
        administrator-selected policies.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="NUMAdef">
    <glossterm>NUMA</glossterm>
    <glossdef>
      <para>Non-uniform memory access. Describes a multi-processing architecture where the
        time taken to access given memory differs depending on memory location relative to a
        given processor. Typically, machines with multiple sockets are NUMA
        architectures.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="odb">
    <glossterm>OBD</glossterm>
    <glossdef>
      <para>Object-based device. The generic term for components in the Lustre software
        stack that can be configured on the client or server. Examples include MDC, OSC,
        LOV, MDT, and OST.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="odbapi">
    <glossterm>OBD API</glossterm>
    <glossdef>
      <para>The programming interface for configuring OBD devices. This was formerly also
        the API for accessing object IO and attribute methods on both the client and server,
        but has been replaced by the OSD API in most parts of the code.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="odbtype">
    <glossterm>OBD type</glossterm>
    <glossdef>
      <para>Module that can implement the Lustre object or metadata APIs. Examples of OBD
        types include the LOV, OSC, and OSD.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="odbfilter">
    <glossterm>obdfilter</glossterm>
    <glossdef>
      <para>An older name for the OBD API data object operation device driver that sits
        between the OST and the OSD. In Lustre 2.4 this device has been renamed OFD.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="objectstorage">
    <glossterm>Object storage</glossterm>
    <glossdef>
      <para>Refers to a storage-device API or protocol involving storage objects. The two
        most well-known instances of object storage are the T10 iSCSI storage object
        protocol and the Lustre object storage protocol (a network implementation of the
        Lustre object API). The principal difference between the Lustre and T10 protocols is
        that the Lustre protocol includes locking and recovery control in the protocol and
        is not tied to a SCSI transport layer.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="opencache">
    <glossterm>opencache</glossterm>
    <glossdef>
      <para>A cache of open file handles. This is a performance enhancement for NFS.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="orphanobjects">
    <glossterm>Orphan objects</glossterm>
    <glossdef>
      <para>Storage objects to which no Lustre file points. Orphan objects can arise from
        crashes and are automatically removed by an <literal>llog</literal> recovery between
        the MDT and OST. When a client deletes a file, the MDT unlinks it from the
        namespace. If this is the last link, the MDT atomically adds the OST objects to a
        per-OST <literal>llog</literal> (in case a crash occurs) and then waits until the
        unlink commits to disk. At this point, it is safe to destroy the OST objects. Once
        the destroy is committed, the MDT <literal>llog</literal> records can be
        cancelled.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="osc">
    <glossterm>OSC</glossterm>
    <glossdef>
      <para>Object storage client. The client module communicating with an OST (via an
        OSS).</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="osd">
    <glossterm>OSD</glossterm>
    <glossdef>
      <para>Object storage device. A generic, industry term for storage devices with a more
        extended interface than block-oriented devices such as disks. For the Lustre file
        system, this name is used to describe a software module that implements an object
        storage API in the kernel. It is also used to refer to an instance of an object
        storage device created by that driver. The OSD device is layered on a file system,
        with methods that mimic create, destroy and I/O operations on file inodes.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="oss">
    <glossterm>OSS</glossterm>
    <glossdef>
      <para>Object storage server. A server OBD that provides access to local OSTs.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="ost">
    <glossterm>OST</glossterm>
    <glossdef>
      <para>Object storage target. An OSD made accessible through a network protocol.
        Typically, an OST is associated with a unique OSD which, in turn, is associated with
        a formatted disk file system on the server containing the data objects.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="pdirops">
    <glossterm>pdirops</glossterm>
    <glossdef>
      <para>A locking protocol in the Linux VFS layer that allows directory operations to be
        performed in parallel.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="pool">
    <glossterm>Pool</glossterm>
    <glossdef>
      <para>OST pools allow an administrator to associate a name with an arbitrary subset of
        OSTs in a Lustre cluster. A group of OSTs can be combined into a named pool with
        unique access permissions and stripe characteristics.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="portal">
    <glossterm>Portal</glossterm>
    <glossdef>
      <para>A service address on an LNET NID that binds requests to a specific request
        service, such as an MDS, MGS, OSS, or LDLM. Services may listen on multiple portals
        to ensure that high priority messages are not queued behind many slow requests on
        another portal.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="ptlrpc">
    <glossterm>PTLRPC</glossterm>
    <glossdef>
      <para>An RPC protocol layered on LNET. This protocol deals with stateful servers and
        has exactly-once semantics and built-in support for recovery.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="recovery">
    <glossterm>Recovery</glossterm>
    <glossdef>
      <para>The process that re-establishes the connection state when a client that was
        previously connected to a server reconnects after the server restarts.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="replay">
    <glossterm>Replay request</glossterm>
    <glossdef>
      <para>The concept of re-executing a server request after the server has lost
        information in its memory caches and shut down. The replay requests are retained by
        clients until the server(s) have confirmed that the data is persistent on disk. Only
        requests for which a client received a reply and were assigned a transaction number
        by the server are replayed. Requests that did not get a reply are resent.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="resent">
    <glossterm>Resent request</glossterm>
    <glossdef>
      <para>An RPC request sent from a client to a server that has not had a reply from the
        server. This might happen if the request was lost on the way to the server, if the
        reply was lost on the way back from the server, or if the server crashes before or
        after processing the request. During server RPC recovery processing, resent requests
        are processed after replayed requests, and use the client RPC XID to determine if
        the resent RPC request was already executed on the server.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="revocation">
    <glossterm>Revocation callback</glossterm>
    <glossdef>
      <para>Also called a "blocking callback". An RPC request made by the lock server
        (typically for an OST or MDT) to a lock client to revoke a granted DLM lock.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="rootsquash">
    <glossterm>Root squash</glossterm>
    <glossdef>
      <para>A mechanism whereby the identity of a root user on a client system is mapped to
        a different identity on the server to prevent root users on clients from accessing
        or modifying root-owned files on the servers. This does not prevent root users on
        the client from assuming the identity of a non-root user, so it should not be
        considered a robust security mechanism. Typically, for management purposes, at least
        one client system should not be subject to root squash.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="routing">
    <glossterm>Routing</glossterm>
    <glossdef>
      <para>LNET routing between different networks and LNDs.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="rpc">
    <glossterm>RPC</glossterm>
    <glossdef>
      <para>Remote procedure call. A network encoding of a request.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="stride">
    <glossterm>Stripe</glossterm>
    <glossdef>
      <para>A contiguous, logical extent of a Lustre file written to a single OST. Used
        synonymously with a single OST data object that makes up part of a file visible to
        user applications.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="stridesize">
    <glossterm>Stripe size</glossterm>
    <glossdef>
      <para>The maximum number of bytes that will be written to an OST object before the
        next object in a file's layout is used when writing sequential data to a file. Once
        a full stripe has been written to each of the objects in the layout, the first
        object will be written to again in round-robin fashion.</para>
    </glossdef>
  </glossentry>
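The round-robin layout implied by the stripe size and stripe count definitions can be
expressed as simple arithmetic. A sketch assuming a plain RAID0 layout (the function name
is an illustration, not Lustre code):

```python
def chunk_location(offset: int, stripe_size: int, stripe_count: int):
    """Map a file byte offset to (object index, offset within object)
    for a RAID0 layout: successive stripe_size chunks go to successive
    objects, wrapping around after stripe_count objects."""
    chunk = offset // stripe_size            # which stripe-sized chunk
    obj = chunk % stripe_count               # round-robin over objects
    obj_off = (chunk // stripe_count) * stripe_size + offset % stripe_size
    return obj, obj_off

MiB = 1 << 20
# With a 1 MiB stripe size and 4 objects, byte 5 MiB of the file lands
# on object 1, at offset 1 MiB within that object.
print(chunk_location(5 * MiB, MiB, 4))  # (1, 1048576)
```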
  <glossentry xml:id="stripcount">
    <glossterm>Stripe count</glossterm>
    <glossdef>
      <para>The number of OSTs holding objects for a RAID0-striped Lustre file.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="stripingmetadata">
    <glossterm>Striping metadata</glossterm>
    <glossdef>
      <para>The extended attribute associated with a file that describes how its data is
        distributed over storage objects. See also <emphasis role="italic">Default stripe
        pattern</emphasis>.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="t10">
    <glossterm>T10 object protocol</glossterm>
    <glossdef>
      <para>An object storage protocol tied to the SCSI transport layer. The Lustre file
        system does not use T10.</para>
    </glossdef>
  </glossentry>
  <glossentry xml:id="widestriping">
    <glossterm>Wide striping</glossterm>
    <glossdef>
      <para>A strategy of using many OSTs to store stripes of a single file. This obtains
        maximum bandwidth access to a single file through parallel utilization of many OSTs.
        For more information about wide striping, see
        <xref xmlns:xlink="http://www.w3.org/1999/xlink"
        linkend="section_syy_gcl_qk"/>.</para>
    </glossdef>
  </glossentry>
</glossary>