-<?xml version="1.0" encoding="UTF-8"?>
-<glossary xmlns="http://docbook.org/ns/docbook" xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en-US">
+<?xml version="1.0" encoding="utf-8"?>
+<glossary xmlns="http://docbook.org/ns/docbook"
+xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en-US">
<title>Glossary</title>
<glossdiv>
<title>A</title>
<glossentry xml:id="acl">
<glossterm>ACL</glossterm>
<glossdef>
- <para>Access control list. An extended attribute associated with a file that contains
- enhanced authorization directives.</para>
+ <para>Access control list. An extended attribute associated with a file
+ that contains enhanced authorization directives.</para>
</glossdef>
</glossentry>
<glossentry xml:id="ostfail">
<glossterm>Administrative OST failure</glossterm>
<glossdef>
- <para>A manual configuration change to mark an OST as unavailable, so that operations
- intended for that OST fail immediately with an I/O error instead of waiting indefinitely
- for OST recovery to complete</para>
+ <para>A manual configuration change to mark an OST as unavailable, so
+ that operations intended for that OST fail immediately with an I/O
+ error instead of waiting indefinitely for OST recovery to
+ complete.</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv>
<title>C</title>
<glossentry xml:id="completioncallback">
- <glossterm>Completion callback </glossterm>
+ <glossterm>Completion callback</glossterm>
<glossdef>
- <para>An RPC made by the lock server on an OST or MDT to another system, usually a client,
- to indicate that the lock is now granted.</para>
+ <para>An RPC made by the lock server on an OST or MDT to another
+ system, usually a client, to indicate that the lock is now
+ granted.</para>
</glossdef>
</glossentry>
<glossentry xml:id="changelog">
- <glossterm>configlog </glossterm>
+ <glossterm>configlog</glossterm>
<glossdef>
- <para>An llog file used in a node, or retrieved from a management server over the network
- with configuration instructions for the Lustre file system at startup time.</para>
+ <para>An llog file stored on a node, or retrieved over the network
+ from a management server, that contains configuration instructions for
+ the Lustre file system at startup time.</para>
</glossdef>
</glossentry>
<glossentry xml:id="configlock">
- <glossterm>Configuration lock </glossterm>
+ <glossterm>Configuration lock</glossterm>
<glossdef>
- <para>A lock held by every node in the cluster to control configuration changes. When the
- configuration is changed on the MGS, it revokes this lock from all nodes. When the nodes
- receive the blocking callback, they quiesce their traffic, cancel and re-enqueue the lock
- and wait until it is granted again. They can then fetch the configuration updates and
- resume normal operation.</para>
+ <para>A lock held by every node in the cluster to control configuration
+ changes. When the configuration is changed on the MGS, it revokes this
+ lock from all nodes. When the nodes receive the blocking callback, they
+ quiesce their traffic, cancel and re-enqueue the lock and wait until it
+ is granted again. They can then fetch the configuration updates and
+ resume normal operation.</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv>
<title>D</title>
<glossentry xml:id="defaultstrippattern">
- <glossterm>Default stripe pattern </glossterm>
+ <glossterm>Default stripe pattern</glossterm>
<glossdef>
- <para>Information in the LOV descriptor that describes the default stripe count, stripe
- size, and layout pattern used for new files in a file system. This can be amended by using
- a directory stripe descriptor or a per-file stripe descriptor.</para>
+ <para>Information in the LOV descriptor that describes the default
+ stripe count, stripe size, and layout pattern used for new files in a
+ file system. This can be amended by using a directory stripe descriptor
+ or a per-file stripe descriptor.</para>
</glossdef>
</glossentry>
<glossentry xml:id="directio">
- <glossterm>Direct I/O </glossterm>
+ <glossterm>Direct I/O</glossterm>
<glossdef>
- <para>A mechanism that can be used during read and write system calls to avoid memory cache
- overhead for large I/O requests. It bypasses the data copy between application and kernel
- memory, and avoids buffering the data in the client memory.</para>
+ <para>A mechanism that can be used during read and write system calls
+ to avoid memory cache overhead for large I/O requests. It bypasses the
+ data copy between application and kernel memory, and avoids buffering
+ the data in the client memory.</para>
</glossdef>
</glossentry>
<glossentry xml:id="dirstripdesc">
- <glossterm>Directory stripe descriptor </glossterm>
+ <glossterm>Directory stripe descriptor</glossterm>
<glossdef>
- <para>An extended attribute that describes the default stripe pattern for new files created
- within that directory. This is also inherited by new subdirectories at the time they are
- created.</para>
+ <para>An extended attribute that describes the default stripe pattern
+ for new files created within that directory. This is also inherited by
+ new subdirectories at the time they are created.</para>
</glossdef>
</glossentry>
- <glossentry xml:id="DNE">
+ <glossentry xml:id="DNE" condition='l24'>
<glossterm>Distributed namespace (DNE)</glossterm>
<glossdef>
- <para>A collection of metadata targets implementing a single file system namespace.</para>
+ <para>A collection of metadata targets serving a single file
+ system namespace. Prior to DNE, Lustre file systems were limited to a
+ single metadata target for the entire namespace. Without the ability
+ to distribute metadata load over multiple targets, Lustre file system
+ performance is limited. Lustre was enhanced with DNE functionality in
+ two development phases. The first phase, completed in Lustre software
+ release 2.4, introduced <emphasis>Remote Directories</emphasis>, which
+ allow the metadata for sub-directories to be serviced by one or more
+ independent MDTs. The second phase, completed in Lustre software
+ release 2.8, introduced <emphasis>Striped Directories</emphasis>,
+ which allow the files in a single directory to be serviced by multiple
+ MDTs.</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv>
<title>E</title>
<glossentry xml:id="ea">
- <glossterm>EA
- </glossterm>
+ <glossterm>EA</glossterm>
<glossdef>
- <para>Extended attribute. A small amount of data that can be retrieved through a name (EA or
- attr) associated with a particular inode. A Lustre file system uses EAs to store striping
- information (indicating the location of file data on OSTs). Examples of extended
- attributes are ACLs, striping information, and the FID of the file.</para>
+ <para>Extended attribute. A small amount of data that can be retrieved
+ through a name (EA or attr) associated with a particular inode. A
+ Lustre file system uses EAs to store striping information (indicating
+ the location of file data on OSTs). Examples of extended attributes are
+ ACLs, striping information, and the FID of the file.</para>
</glossdef>
</glossentry>
<glossentry xml:id="eviction">
- <glossterm>Eviction
- </glossterm>
+ <glossterm>Eviction</glossterm>
<glossdef>
- <para>The process of removing a client's state from the server if the client is unresponsive
- to server requests after a timeout or if server recovery fails. If a client is still
- running, it is required to flush the cache associated with the server when it becomes
- aware that it has been evicted.</para>
+ <para>The process of removing a client's state from the server if the
+ client is unresponsive to server requests after a timeout or if server
+ recovery fails. If a client is still running, it is required to flush
+ the cache associated with the server when it becomes aware that it has
+ been evicted.</para>
</glossdef>
</glossentry>
<glossentry xml:id="export">
- <glossterm>Export
- </glossterm>
+ <glossterm>Export</glossterm>
<glossdef>
- <para>The state held by a server for a client that is sufficient to transparently recover all in-flight operations when a single failure occurs.</para>
+ <para>The state held by a server for a client that is sufficient to
+ transparently recover all in-flight operations when a single failure
+ occurs.</para>
</glossdef>
</glossentry>
<glossentry>
- <glossterm>Extent </glossterm>
+ <glossterm>Extent</glossterm>
<glossdef>
- <para>A range of contiguous bytes or blocks in a file that are addressed by a {start,
- length} tuple instead of individual block numbers.</para>
+ <para>A range of contiguous bytes or blocks in a file that are
+ addressed by a {start, length} tuple instead of individual block
+ numbers.</para>
</glossdef>
</glossentry>
<glossentry xml:id="extendloc">
- <glossterm>Extent lock </glossterm>
+ <glossterm>Extent lock</glossterm>
<glossdef>
- <para>An LDLM lock used by the OSC to protect an extent in a storage object for concurrent
- control of read/write, file size acquisition, and truncation operations.</para>
+ <para>An LDLM lock used by the OSC to protect an extent in a storage
+ object for concurrent control of read/write, file size acquisition, and
+ truncation operations.</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv>
<title>F</title>
<glossentry xml:id="failback">
- <glossterm>Failback
- </glossterm>
+ <glossterm>Failback</glossterm>
<glossdef>
- <para> The failover process in which the default active server regains control from the
- backup server that had taken control of the service.</para>
+ <para>The failover process in which the default active server regains
+ control from the backup server that had taken control of the
+ service.</para>
</glossdef>
</glossentry>
<glossentry xml:id="failoutost">
- <glossterm>Failout OST
- </glossterm>
+ <glossterm>Failout OST</glossterm>
<glossdef>
- <para>An OST that is not expected to recover if it fails to answer client requests. A
- failout OST can be administratively failed, thereby enabling clients to return errors when
- accessing data on the failed OST without making additional network requests or waiting for
- OST recovery to complete.</para>
+ <para>An OST that is not expected to recover if it fails to answer
+ client requests. A failout OST can be administratively failed, thereby
+ enabling clients to return errors when accessing data on the failed OST
+ without making additional network requests or waiting for OST recovery
+ to complete.</para>
</glossdef>
</glossentry>
<glossentry xml:id="failover">
- <glossterm>Failover
- </glossterm>
+ <glossterm>Failover</glossterm>
<glossdef>
- <para>The process by which a standby computer server system takes over for an active computer server after a failure of the active node. Typically, the standby computer server gains exclusive access to a shared storage device between the two servers.</para>
+ <para>The process by which a standby computer server system takes over
+ for an active computer server after a failure of the active node.
+ Typically, the standby computer server gains exclusive access to a
+ shared storage device between the two servers.</para>
</glossdef>
</glossentry>
<glossentry xml:id="fid">
- <glossterm>FID
- </glossterm>
+ <glossterm>FID</glossterm>
<glossdef>
- <para> Lustre File Identifier. A 128-bit file system-unique identifier for a file or object
- in the file system. The FID structure contains a unique 64-bit sequence number (see
- <emphasis role="italic">FLDB</emphasis>), a 32-bit object ID (OID), and a 32-bit version
- number. The sequence number is unique across all Lustre targets (OSTs and MDTs).</para>
+ <para>Lustre File Identifier. A 128-bit file system-unique identifier
+ for a file or object in the file system. The FID structure contains a
+ unique 64-bit sequence number (see
+ <emphasis role="italic">FLDB</emphasis>), a 32-bit object ID (OID), and
+ a 32-bit version number. The sequence number is unique across all
+ Lustre targets (OSTs and MDTs).</para>
</glossdef>
</glossentry>
<glossentry xml:id="fileset">
- <glossterm>Fileset
- </glossterm>
+ <glossterm>Fileset</glossterm>
<glossdef>
- <para>A group of files that are defined through a directory that represents the start point
- of a file system.</para>
+ <para>A group of files that are defined through a directory that
+ represents the start point of a file system.</para>
</glossdef>
</glossentry>
<glossentry xml:id="fldb">
- <glossterm>FLDB
- </glossterm>
+ <glossterm>FLDB</glossterm>
<glossdef>
- <para>FID location database. This database maps a sequence of FIDs to a specific target (MDT
- or OST), which manages the objects within the sequence. The FLDB is cached by all clients
- and servers in the file system, but is typically only modified when new servers are added
- to the file system.</para>
+ <para>FID location database. This database maps a sequence of FIDs to a
+ specific target (MDT or OST), which manages the objects within the
+ sequence. The FLDB is cached by all clients and servers in the file
+ system, but is typically only modified when new servers are added to
+ the file system.</para>
</glossdef>
</glossentry>
<glossentry xml:id="flightgroup">
- <glossterm>Flight group </glossterm>
+ <glossterm>Flight group</glossterm>
<glossdef>
- <para>Group of I/O RPCs initiated by the OSC that are concurrently queued or processed at
- the OST. Increasing the number of RPCs in flight for high latency networks can increase
- throughput and reduce visible latency at the client.</para>
+ <para>Group of I/O RPCs initiated by the OSC that are concurrently
+ queued or processed at the OST. Increasing the number of RPCs in flight
+ for high latency networks can increase throughput and reduce visible
+ latency at the client.</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv>
<title>G</title>
<glossentry xml:id="glimpsecallback">
- <glossterm>Glimpse callback </glossterm>
+ <glossterm>Glimpse callback</glossterm>
<glossdef>
- <para>An RPC made by an OST or MDT to another system (usually a client) to indicate that a
- held extent lock should be surrendered. If the system is using the lock, then the system
- should return the object size and timestamps in the reply to the glimpse callback instead
- of cancelling the lock. Glimpses are introduced to optimize the acquisition of file
- attributes without introducing contention on an active lock.</para>
+ <para>An RPC made by an OST or MDT to another system (usually a client)
+ to indicate that a held extent lock should be surrendered. If the
+ system is using the lock, then the system should return the object size
+ and timestamps in the reply to the glimpse callback instead of
+ cancelling the lock. Glimpses are introduced to optimize the
+ acquisition of file attributes without introducing contention on an
+ active lock.</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv>
<title>I</title>
<glossentry xml:id="import">
- <glossterm>Import
- </glossterm>
+ <glossterm>Import</glossterm>
<glossdef>
- <para>The state held held by the client for each target that it is connected to. It holds
- server NIDs, connection state, and uncommitted RPCs needed to fully recover a transaction
- sequence after a server failure and restart.</para>
+ <para>The state held by the client for each target that it is
+ connected to. It holds server NIDs, connection state, and uncommitted
+ RPCs needed to fully recover a transaction sequence after a server
+ failure and restart.</para>
</glossdef>
</glossentry>
<glossentry xml:id="intentlock">
- <glossterm>Intent lock </glossterm>
+ <glossterm>Intent lock</glossterm>
<glossdef>
- <para>A special Lustre file system locking operation in the Linux kernel. An intent lock
- combines a request for a lock with the full information to perform the operation(s) for
- which the lock was requested. This offers the server the option of granting the lock or
- performing the operation and informing the client of the operation result without granting
- a lock. The use of intent locks enables metadata operations (even complicated ones) to be
- implemented with a single RPC from the client to the server.</para>
+ <para>A special Lustre file system locking operation in the Linux
+ kernel. An intent lock combines a request for a lock with the full
+ information to perform the operation(s) for which the lock was
+ requested. This offers the server the option of granting the lock or
+ performing the operation and informing the client of the operation
+ result without granting a lock. The use of intent locks enables
+ metadata operations (even complicated ones) to be implemented with a
+ single RPC from the client to the server.</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv>
<title>L</title>
<glossentry xml:id="lbug">
- <glossterm>LBUG
- </glossterm>
+ <glossterm>LBUG</glossterm>
<glossdef>
- <para>A fatal error condition detected by the software that halts execution of the kernel
- thread to avoid potential further corruption of the system state. It is printed to the
- console log and triggers a dump of the internal debug log. The system must be rebooted to
- clear this state.</para>
+ <para>A fatal error condition detected by the software that halts
+ execution of the kernel thread to avoid potential further corruption of
+ the system state. It is printed to the console log and triggers a dump
+ of the internal debug log. The system must be rebooted to clear this
+ state.</para>
</glossdef>
</glossentry>
<glossentry xml:id="ldlm">
- <glossterm>LDLM
- </glossterm>
+ <glossterm>LDLM</glossterm>
<glossdef>
<para>Lustre Distributed Lock Manager.</para>
</glossdef>
</glossentry>
<glossentry xml:id="lfs">
- <glossterm>lfs
- </glossterm>
+ <glossterm>lfs</glossterm>
<glossdef>
- <para>The Lustre File System command-line utility that allows end users to interact with
- Lustre-specific features, such as setting or checking file striping or per-target free
- space. For more details, see <xref xmlns:xlink="http://www.w3.org/1999/xlink"
- linkend="dbdoclet.50438206_94597"/>.</para>
+ <para>The Lustre file system command-line utility that allows end users
+ to interact with Lustre software features, such as setting or checking
+ file striping or per-target free space. For more details, see
+ <xref xmlns:xlink="http://www.w3.org/1999/xlink"
+ linkend="dbdoclet.50438206_94597" />.</para>
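+ <para>For illustration (the mount point
+ <literal>/mnt/lustre</literal> and path names are assumptions), typical
+ <literal>lfs</literal> commands include:</para>
+ <screen>lfs setstripe -c 4 /mnt/lustre/dir   # stripe new files in dir over 4 OSTs
+ lfs getstripe /mnt/lustre/dir/file   # show a file's striping layout
+ lfs df -h                            # show per-target free space</screen>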
</glossdef>
</glossentry>
<glossentry xml:id="lfsck">
- <glossterm>lfsck
- </glossterm>
+ <glossterm>LFSCK</glossterm>
<glossdef>
- <para>Lustre file system check. A distributed version of a disk file system checker.
- Normally, <literal>lfsck</literal> does not need to be run, except when file systems are
- damaged by events such as multiple disk failures and cannot be recovered using file system
- journal recovery.</para>
+ <para>Lustre file system check. A distributed version of a disk file
+ system checker. Normally,
+ <literal>lfsck</literal> does not need to be run, except when file
+ systems are damaged by events such as multiple disk failures and cannot
+ be recovered using file system journal recovery.</para>
</glossdef>
</glossentry>
<glossentry xml:id="llite">
- <glossterm>llite </glossterm>
+ <glossterm>llite</glossterm>
<glossdef>
- <para>Lustre lite. This term is in use inside code and in module names for code that is
- related to the Linux client VFS interface.</para>
+ <para>Lustre lite. This term is in use inside code and in module names
+ for code that is related to the Linux client VFS interface.</para>
</glossdef>
</glossentry>
<glossentry xml:id="llog">
- <glossterm>llog </glossterm>
+ <glossterm>llog</glossterm>
<glossdef>
- <para>Lustre log. An efficient log data structure used internally by the file system for
- storing configuration and distributed transaction records. An <literal>llog</literal> is
- suitable for rapid transactional appends of records and cheap cancellation of records
- through a bitmap.</para>
+ <para>Lustre log. An efficient log data structure used internally by
+ the file system for storing configuration and distributed transaction
+ records. An
+ <literal>llog</literal> is suitable for rapid transactional appends of
+ records and cheap cancellation of records through a bitmap.</para>
</glossdef>
</glossentry>
<glossentry xml:id="llogcatalog">
- <glossterm>llog catalog </glossterm>
+ <glossterm>llog catalog</glossterm>
<glossdef>
- <para>Lustre log catalog. An <literal>llog</literal> with records that each point at an
- <literal>llog</literal>. Catalogs were introduced to give <literal>llogs</literal>
- increased scalability. <literal>llogs</literal> have an originator which writes records
- and a replicator which cancels records when the records are no longer needed.</para>
+ <para>Lustre log catalog. An
+ <literal>llog</literal> with records that each point at an
+ <literal>llog</literal>. Catalogs were introduced to give
+ <literal>llogs</literal> increased scalability.
+ <literal>llogs</literal> have an originator which writes records and a
+ replicator which cancels records when the records are no longer
+ needed.</para>
</glossdef>
</glossentry>
<glossentry xml:id="lmv">
- <glossterm>LMV
- </glossterm>
+ <glossterm>LMV</glossterm>
<glossdef>
- <para>Logical metadata volume. A module that implements a DNE client-side abstraction
- device. It allows a client to work with many MDTs without changes to the llite module. The
- LMV code forwards requests to the correct MDT based on name or directory striping
- information and merges replies into a single result to pass back to the higher
- <literal>llite</literal> layer that connects the Lustre file system with Linux VFS,
- supports VFS semantics, and complies with POSIX interface specifications.</para>
+ <para>Logical metadata volume. A module that implements a DNE
+ client-side abstraction device. It allows a client to work with many
+ MDTs without changes to the llite module. The LMV code forwards
+ requests to the correct MDT based on name or directory striping
+ information and merges replies into a single result to pass back to the
+ higher
+ <literal>llite</literal> layer that connects the Lustre file system with
+ Linux VFS, supports VFS semantics, and complies with POSIX interface
+ specifications.</para>
</glossdef>
</glossentry>
<glossentry xml:id="lnd">
- <glossterm>LND
- </glossterm>
+ <glossterm>LND</glossterm>
<glossdef>
- <para>Lustre network driver. A code module that enables LNET support over particular
- transports, such as TCP and various kinds of InfiniBand networks.</para>
+ <para>Lustre network driver. A code module that enables LNet support
+ over particular transports, such as TCP and various kinds of InfiniBand
+ networks.</para>
</glossdef>
</glossentry>
<glossentry xml:id="lnet">
- <glossterm>LNET
- </glossterm>
+ <glossterm>LNet</glossterm>
<glossdef>
- <para>Lustre Networking. A message passing network protocol capable of running and routing through various physical layers. LNET forms the underpinning of LNETrpc.</para>
+ <para>Lustre networking. A message passing network protocol capable of
+ running and routing through various physical layers. LNet forms the
+ underpinning of the PTLRPC layer.</para>
</glossdef>
</glossentry>
<glossentry xml:id="lockclient">
- <glossterm>Lock client </glossterm>
+ <glossterm>Lock client</glossterm>
<glossdef>
- <para>A module that makes lock RPCs to a lock server and handles revocations from the
- server.</para>
+ <para>A module that makes lock RPCs to a lock server and handles
+ revocations from the server.</para>
</glossdef>
</glossentry>
<glossentry xml:id="lockserver">
- <glossterm>Lock server </glossterm>
+ <glossterm>Lock server</glossterm>
<glossdef>
- <para>A service that is co-located with a storage target that manages locks on certain
- objects. It also issues lock callback requests, calls while servicing or, for objects that
- are already locked, completes lock requests.</para>
+ <para>A service that is co-located with a storage target and manages
+ locks on certain objects. It also issues lock callback requests while
+ servicing or completing lock requests for objects that are already
+ locked.</para>
</glossdef>
</glossentry>
<glossentry xml:id="lov">
- <glossterm>LOV
- </glossterm>
+ <glossterm>LOV</glossterm>
<glossdef>
- <para>Logical object volume. The object storage analog of a logical volume in a block device
- volume management system, such as LVM or EVMS. The LOV is primarily used to present a
- collection of OSTs as a single device to the MDT and client file system drivers.</para>
+ <para>Logical object volume. The object storage analog of a logical
+ volume in a block device volume management system, such as LVM or EVMS.
+ The LOV is primarily used to present a collection of OSTs as a single
+ device to the MDT and client file system drivers.</para>
</glossdef>
</glossentry>
<glossentry xml:id="lovdes">
- <glossterm>LOV descriptor
- </glossterm>
+ <glossterm>LOV descriptor</glossterm>
<glossdef>
- <para>A set of configuration directives which describes which nodes are OSS systems in the
- Lustre cluster and providing names for their OSTs.</para>
+ <para>A set of configuration directives that describes which nodes are
+ OSS systems in the Lustre cluster and provides names for their
+ OSTs.</para>
</glossdef>
</glossentry>
<glossentry xml:id="lustreclient">
- <glossterm>Lustre client
- </glossterm>
+ <glossterm>Lustre client</glossterm>
<glossdef>
<para>An operating instance with a mounted Lustre file system.</para>
</glossdef>
</glossentry>
<glossentry xml:id="lustrefile">
- <glossterm>Lustre file
- </glossterm>
+ <glossterm>Lustre file</glossterm>
<glossdef>
- <para>A file in the Lustre file system. The implementation of a Lustre file is through an
- inode on a metadata server that contains references to a storage object on OSSs.</para>
+ <para>A file in the Lustre file system. The implementation of a Lustre
+ file is through an inode on a metadata server that contains references
+ to storage objects on OSSs.</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv>
<title>M</title>
<glossentry xml:id="mballoc">
- <glossterm>mballoc </glossterm>
+ <glossterm>mballoc</glossterm>
<glossdef>
- <para>Multi-block allocate. Functionality in ext4 that enables the
- <literal>ldiskfs</literal> file system to allocate multiple blocks with a single request
- to the block allocator. </para>
+ <para>Multi-block allocate. Functionality in ext4 that enables the
+ <literal>ldiskfs</literal> file system to allocate multiple blocks with
+ a single request to the block allocator.</para>
</glossdef>
</glossentry>
<glossentry xml:id="mdc">
- <glossterm>MDC
- </glossterm>
+ <glossterm>MDC</glossterm>
<glossdef>
- <para>Metadata client. A Lustre client component that sends metadata requests via RPC over
- LNET to the metadata target (MDT).</para>
+ <para>Metadata client. A Lustre client component that sends metadata
+ requests via RPC over LNet to the metadata target (MDT).</para>
</glossdef>
</glossentry>
<glossentry xml:id="mdd">
- <glossterm>MDD
- </glossterm>
+ <glossterm>MDD</glossterm>
<glossdef>
- <para>Metadata disk device. Lustre server component that interfaces with the underlying
- object storage device to manage the Lustre file system namespace (directories, file
- ownership, attributes).</para>
+ <para>Metadata disk device. Lustre server component that interfaces
+ with the underlying object storage device to manage the Lustre file
+ system namespace (directories, file ownership, attributes).</para>
</glossdef>
</glossentry>
<glossentry xml:id="mds">
- <glossterm>MDS
- </glossterm>
+ <glossterm>MDS</glossterm>
<glossdef>
- <para>Metadata server. The server node that is hosting the metadata target (MDT).</para>
+ <para>Metadata server. The server node that is hosting the metadata
+ target (MDT).</para>
</glossdef>
</glossentry>
<glossentry xml:id="mdt">
- <glossterm>MDT
- </glossterm>
+ <glossterm>MDT</glossterm>
<glossdef>
- <para>Metadata target. A storage device containing the file system namespace that is made
- available over the network to a client. It stores filenames, attributes, and the layout of
- OST objects that store the file data.</para>
+ <para>Metadata target. A storage device containing the file system
+ namespace that is made available over the network to a client. It
+ stores filenames, attributes, and the layout of OST objects that store
+ the file data.</para>
</glossdef>
</glossentry>
<glossentry xml:id="mdt0" condition='l24'>
- <glossterm>MDT0
- </glossterm>
+ <glossterm>MDT0</glossterm>
<glossdef>
- <para>The metadata target for the file system root. Since Lustre release 2.4, multiple
- metadata targets are possible in the same file system. MDT0 is the root of the file
- system, which must be available for the file system to be accessible.</para>
+ <para>The metadata target for the file system root. Since Lustre
+ software release 2.4, multiple metadata targets are possible in the
+ same file system. MDT0 is the root of the file system, which must be
+ available for the file system to be accessible.</para>
</glossdef>
</glossentry>
<glossentry xml:id="mgs">
- <glossterm>MGS
- </glossterm>
+ <glossterm>MGS</glossterm>
<glossdef>
- <para>Management service. A software module that manages the startup configuration and
- changes to the configuration. Also, the server node on which this system runs.</para>
+ <para>Management service. A software module that manages the startup
+ configuration and changes to the configuration. Also, the server node
+ on which this system runs.</para>
</glossdef>
</glossentry>
<glossentry xml:id="mountconf">
- <glossterm>mountconf </glossterm>
+ <glossterm>mountconf</glossterm>
<glossdef>
- <para>The Lustre configuration protocol that formats disk file systems on servers with the
- <literal>mkfs.lustre</literal> program, and prepares them for automatic incorporation
- into a Lustre cluster. This allows clients to be configured and mounted with a simple
- <literal>mount</literal> command.</para>
+ <para>The Lustre configuration protocol that formats disk file systems
+ on servers with the
+ <literal>mkfs.lustre</literal> program, and prepares them for automatic
+ incorporation into a Lustre cluster. This allows clients to be
+ configured and mounted with a simple
+ <literal>mount</literal> command.</para>
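+ <para>A hypothetical sequence (the device, server node, and the
+ <literal>testfs</literal> file system name are chosen for
+ illustration):</para>
+ <screen>mkfs.lustre --fsname=testfs --mgs --mdt --index=0 /dev/sda
+ mount -t lustre /dev/sda /mnt/mdt
+ mount -t lustre mgsnode@tcp:/testfs /mnt/testfs</screen>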
</glossdef>
</glossentry>
</glossdiv>
<glossdiv>
<title>N</title>
<glossentry xml:id="nid">
- <glossterm>NID
- </glossterm>
+ <glossterm>NID</glossterm>
<glossdef>
- <para>Network identifier. Encodes the type, network number, and network address of a network
- interface on a node for use by the Lustre file system.</para>
+ <para>Network identifier. Encodes the type, network number, and network
+ address of a network interface on a node for use by the Lustre file
+ system.</para>
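+ <para>For example (the address is chosen for illustration), a NID on
+ the first TCP network takes the form:</para>
+ <screen>192.168.0.1@tcp0</screen>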
</glossdef>
</glossentry>
<glossentry xml:id="nioapi">
- <glossterm>NIO API
- </glossterm>
+ <glossterm>NIO API</glossterm>
<glossdef>
- <para>A subset of the LNET RPC module that implements a library for sending large network requests, moving buffers with RDMA.</para>
+ <para>A subset of the LNet RPC module that implements a library for
+ sending large network requests, moving buffers with RDMA.</para>
</glossdef>
</glossentry>
<glossentry xml:id="nodeaffdef">
- <glossterm>Node affinity </glossterm>
+ <glossterm>Node affinity</glossterm>
<glossdef>
- <para>Node affinity describes the property of a multi-threaded application to behave
- sensibly on multiple cores. Without the property of node affinity, an operating scheduler
- may move application threads across processors in a sub-optimal way that significantly
- reduces performance of the application overall.</para>
+ <para>Node affinity describes the property of a multi-threaded
+ application to behave sensibly on multiple cores. Without the property
+ of node affinity, an operating system scheduler may move application
+ threads across processors in a sub-optimal way that significantly
+ reduces overall application performance.</para>
</glossdef>
</glossentry>
<glossentry xml:id="nrs">
- <glossterm>NRS
- </glossterm>
+ <glossterm>NRS</glossterm>
<glossdef>
- <para>Network request scheduler. A subcomponent of the PTLRPC layer, which specifies the
- order in which RPCs are handled at servers. This allows optimizing large numbers of
- incoming requests for disk access patterns, fairness between clients, and other
- administrator-selected policies.</para>
+ <para>Network request scheduler. A subcomponent of the PTLRPC layer,
+ which specifies the order in which RPCs are handled at servers. This
+ allows optimizing large numbers of incoming requests for disk access
+ patterns, fairness between clients, and other administrator-selected
+ policies.</para>
</glossdef>
</glossentry>
<glossentry xml:id="NUMAdef">
- <glossterm>NUMA
- </glossterm>
+ <glossterm>NUMA</glossterm>
<glossdef>
- <para>Non-uniform memory access. Describes a multi-processing architecture where the time
- taken to access given memory differs depending on memory location relative to a given
- processor. Typically machines with multiple sockets are NUMA architectures.</para>
+ <para>Non-uniform memory access. Describes a multi-processing
+ architecture in which the time taken to access memory depends on its
+ location relative to a given processor. Typically machines with
+ multiple sockets are NUMA architectures.</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv>
<title>O</title>
<glossentry xml:id="odb">
- <glossterm>OBD </glossterm>
+ <glossterm>OBD</glossterm>
<glossdef>
- <para>Object-based device. The generic term for components in the Lustre software stack that
- can be configured on the client or server. Examples include MDC, OSC, LOV, MDT, and
- OST.</para>
+ <para>Object-based device. The generic term for components in the
+ Lustre software stack that can be configured on the client or server.
+ Examples include MDC, OSC, LOV, MDT, and OST.</para>
</glossdef>
</glossentry>
<glossentry xml:id="odbapi">
- <glossterm>OBD API </glossterm>
+ <glossterm>OBD API</glossterm>
<glossdef>
- <para>The programming interface for configuring OBD devices. This was formerly also the API
- for accessing object IO and attribute methods on both the client and server, but has been
- replaced by the OSD API in most parts of the code.</para>
+ <para>The programming interface for configuring OBD devices. This was
+ formerly also the API for accessing object I/O and attribute methods on
+ both the client and server, but has been replaced by the OSD API in
+ most parts of the code.</para>
</glossdef>
</glossentry>
<glossentry xml:id="odbtype">
- <glossterm>OBD type </glossterm>
+ <glossterm>OBD type</glossterm>
<glossdef>
- <para>Module that can implement the Lustre object or metadata APIs. Examples of OBD types
- include the LOV, OSC and OSD.</para>
+ <para>Module that can implement the Lustre object or metadata APIs.
+ Examples of OBD types include the LOV, OSC and OSD.</para>
</glossdef>
</glossentry>
<glossentry xml:id="odbfilter">
- <glossterm>Obdfilter </glossterm>
+ <glossterm>Obdfilter</glossterm>
<glossdef>
- <para>An older name for the OBD API data object operation device driver that sits between
- the OST and the OSD. In Lustre 2.4 this device has been renamed OFD."</para>
+ <para>An older name for the OBD API data object operation device driver
+ that sits between the OST and the OSD. In Lustre software release 2.4
+ this device has been renamed OFD.</para>
</glossdef>
</glossentry>
<glossentry xml:id="objectstorage">
- <glossterm>Object storage </glossterm>
+ <glossterm>Object storage</glossterm>
<glossdef>
- <para>Refers to a storage-device API or protocol involving storage objects. The two most
- well known instances of object storage are the T10 iSCSI storage object protocol and the
- Lustre object storage protocol (a network implementation of the Lustre object API). The
- principal difference between the Lustre and T10 protocols is that the Lustre protocol
- includes locking and recovery control in the protocol and is not tied to a SCSI transport
- layer.</para>
+ <para>Refers to a storage-device API or protocol involving storage
+ objects. The two most well known instances of object storage are the
+ T10 iSCSI storage object protocol and the Lustre object storage
+ protocol (a network implementation of the Lustre object API). The
+ principal difference between the Lustre and T10 protocols is that the
+ Lustre protocol includes locking and recovery control and is not tied
+ to a SCSI transport layer.</para>
</glossdef>
</glossentry>
<glossentry xml:id="opencache">
- <glossterm>opencache </glossterm>
+ <glossterm>opencache</glossterm>
<glossdef>
- <para>A cache of open file handles. This is a performance enhancement for NFS.</para>
+ <para>A cache of open file handles. This is a performance enhancement
+ for NFS.</para>
</glossdef>
</glossentry>
<glossentry xml:id="orphanobjects">
- <glossterm>Orphan objects </glossterm>
+ <glossterm>Orphan objects</glossterm>
<glossdef>
- <para>Storage objects to which no Lustre file points. Orphan objects can arise from crashes
- and are automatically removed by an <literal>llog</literal> recovery between the MDT and
- OST. When a client deletes a file, the MDT unlinks it from the namespace. If this is the
- last link, it will atomically add the OST objects into a per-OST <literal>llog</literal>
- (if a crash has occurred) and then wait until the unlink commits to disk. (At this point,
- it is safe to destroy the OST objects. Once the destroy is committed, the MDT
- <literal>llog</literal> records can be cancelled.)</para>
+ <para>Storage objects to which no Lustre file points. Orphan objects
+ can arise from crashes and are automatically removed by an
+ <literal>llog</literal> recovery between the MDT and OST. When a client
+ deletes a file, the MDT unlinks it from the namespace. If this is the
+ last link, it will atomically add the OST objects into a per-OST
+ <literal>llog</literal> (if a crash has occurred) and then wait until
+ the unlink commits to disk. (At this point, it is safe to destroy the
+ OST objects. Once the destroy is committed, the MDT
+ <literal>llog</literal> records can be cancelled.)</para>
</glossdef>
</glossentry>
<glossentry xml:id="osc">
- <glossterm>OSC </glossterm>
+ <glossterm>OSC</glossterm>
<glossdef>
- <para>Object storage client. The client module communicating to an OST (via an OSS).</para>
+ <para>Object storage client. The client module communicating with an OST
+ (via an OSS).</para>
</glossdef>
</glossentry>
<glossentry xml:id="osd">
- <glossterm>OSD </glossterm>
+ <glossterm>OSD</glossterm>
<glossdef>
- <para>Object storage device. A generic, industry term for storage devices with a more
- extended interface than block-oriented devices such as disks. For the Lustre file system,
- this name is used to describe a software module that implements an object storage API in
- the kernel. It is also used to refer to an instance of an object storage device created by
- that driver. The OSD device is layered on a file system, with methods that mimic create,
- destroy and I/O operations on file inodes.</para>
+ <para>Object storage device. A generic industry term for storage
+ devices with a more extended interface than block-oriented devices such
+ as disks. For the Lustre file system, this name is used to describe a
+ software module that implements an object storage API in the kernel. It
+ is also used to refer to an instance of an object storage device
+ created by that driver. The OSD device is layered on a file system,
+ with methods that mimic create, destroy and I/O operations on file
+ inodes.</para>
</glossdef>
</glossentry>
<glossentry xml:id="oss">
- <glossterm>OSS </glossterm>
+ <glossterm>OSS</glossterm>
<glossdef>
- <para>Object storage server. A server OBD that provides access to local OSTs.</para>
+ <para>Object storage server. A server OBD that provides access to local
+ OSTs.</para>
</glossdef>
</glossentry>
<glossentry xml:id="ost">
- <glossterm>OST </glossterm>
+ <glossterm>OST</glossterm>
<glossdef>
- <para>Object storage target. An OSD made accessible through a network protocol. Typically,
- an OST is associated with a unique OSD which, in turn is associated with a formatted disk
- file system on the server containing the data objects.</para>
+ <para>Object storage target. An OSD made accessible through a network
+ protocol. Typically, an OST is associated with a unique OSD which, in
+ turn, is associated with a formatted disk file system on the server
+ containing the data objects.</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv>
<title>P</title>
<glossentry xml:id="pdirops">
- <glossterm>pdirops </glossterm>
+ <glossterm>pdirops</glossterm>
<glossdef>
- <para>A locking protocol in the Linux VFS layer that allows for directory operations
- performed in parallel.</para>
+ <para>A locking protocol in the Linux VFS layer that allows directory
+ operations to be performed in parallel.</para>
</glossdef>
</glossentry>
<glossentry xml:id="pool">
- <glossterm>Pool </glossterm>
+ <glossterm>Pool</glossterm>
<glossdef>
- <para>OST pools allows the administrator to associate a name with an arbitrary subset of
- OSTs in a Lustre cluster. A group of OSTs can be combined into a named pool with unique
- access permissions and stripe characteristics.</para>
+ <para>OST pools allow the administrator to associate a name with an
+ arbitrary subset of OSTs in a Lustre cluster. A group of OSTs can be
+ combined into a named pool with unique access permissions and stripe
+ characteristics.</para>
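+ <para>For example, an administrator might group four OSTs into a named
+ pool and direct new files to it (the <literal>testfs</literal> and
+ <literal>flash</literal> names are illustrative):</para>
+ <screen>mgs# lctl pool_new testfs.flash
+mgs# lctl pool_add testfs.flash OST[0-3]
+client# lfs setstripe -p flash /mnt/testfs/scratch</screen>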
</glossdef>
</glossentry>
<glossentry xml:id="portal">
- <glossterm>Portal </glossterm>
+ <glossterm>Portal</glossterm>
<glossdef>
- <para>A service address on an LNET NID that binds requests to a specific request service,
- such as an MDS, MGS, OSS, or LDLM. Services may listen on multiple portals to ensure that
- high priority messages are not queued behind many slow requests on another portal.</para>
+ <para>A service address on an LNet NID that binds requests to a
+ specific request service, such as an MDS, MGS, OSS, or LDLM. Services
+ may listen on multiple portals to ensure that high priority messages
+ are not queued behind many slow requests on another portal.</para>
</glossdef>
</glossentry>
<glossentry xml:id="ptlrpc">
- <glossterm>PTLRPC
- </glossterm>
+ <glossterm>PTLRPC</glossterm>
<glossdef>
- <para>An RPC protocol layered on LNET. This protocol deals with stateful servers and has exactly-once semantics and built in support for recovery.</para>
+ <para>An RPC protocol layered on LNet. This protocol deals with
+ stateful servers and has exactly-once semantics and built-in support
+ for recovery.</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv>
<title>R</title>
<glossentry xml:id="recovery">
- <glossterm>Recovery
- </glossterm>
+ <glossterm>Recovery</glossterm>
<glossdef>
- <para>The process that re-establishes the connection state when a client that was previously connected to a server reconnects after the server restarts.</para>
+ <para>The process that re-establishes the connection state when a
+ client that was previously connected to a server reconnects after the
+ server restarts.</para>
+ </glossdef>
+ </glossentry>
+ <glossentry xml:id="remotedirectories" condition="l24">
+ <glossterm>Remote directory</glossterm>
+ <glossdef>
+ <para>A remote directory describes a feature of
+ Lustre where metadata for files in a given directory may be
+ stored on a different MDT than the metadata for the parent
+ directory. Remote directories only became possible with the
+ advent of DNE Phase 1, which arrived in Lustre version
+ 2.4. Remote directories are available to system
+ administrators who wish to provide individual metadata
+ targets for individual workloads.</para>
</glossdef>
</glossentry>
<glossentry xml:id="replay">
<glossterm>Replay request</glossterm>
<glossdef>
- <para>The concept of re-executing a server request after the server has lost information in
- its memory caches and shut down. The replay requests are retained by clients until the
- server(s) have confirmed that the data is persistent on disk. Only requests for which a
- client received a reply and were assigned a transaction number by the server are replayed.
- Requests that did not get a reply are resent.</para>
+ <para>The concept of re-executing a server request after the server has
+ lost information in its memory caches and shut down. The replay
+ requests are retained by clients until the server(s) have confirmed
+ that the data is persistent on disk. Only requests for which a client
+ received a reply and were assigned a transaction number by the server
+ are replayed. Requests that did not get a reply are resent.</para>
</glossdef>
</glossentry>
<glossentry xml:id="resent">
- <glossterm>Resent request </glossterm>
+ <glossterm>Resent request</glossterm>
<glossdef>
- <para>An RPC request sent from a client to a server that has not had a reply from the
- server. This might happen if the request was lost on the way to the server, if the reply
- was lost on the way back from the server, or if the server crashes before or after
- processing the request. During server RPC recovery processing, resent requests are
- processed after replayed requests, and use the client RPC XID to determine if the resent
- RPC request was already executed on the server. </para>
+ <para>An RPC request sent from a client to a server that has not had a
+ reply from the server. This might happen if the request was lost on the
+ way to the server, if the reply was lost on the way back from the
+ server, or if the server crashes before or after processing the
+ request. During server RPC recovery processing, resent requests are
+ processed after replayed requests, and use the client RPC XID to
+ determine if the resent RPC request was already executed on the
+ server.</para>
</glossdef>
</glossentry>
<glossentry xml:id="revocation">
- <glossterm>Revocation callback </glossterm>
+ <glossterm>Revocation callback</glossterm>
<glossdef>
- <para>Also called a "blocking callback". An RPC request made by the lock server (typically
- for an OST or MDT) to a lock client to revoke a granted DLM lock.</para>
+ <para>Also called a "blocking callback". An RPC request made by the
+ lock server (typically for an OST or MDT) to a lock client to revoke a
+ granted DLM lock.</para>
</glossdef>
</glossentry>
<glossentry xml:id="rootsquash">
- <glossterm>Root squash
- </glossterm>
+ <glossterm>Root squash</glossterm>
<glossdef>
- <para>A mechanism whereby the identity of a root user on a client system is mapped to a
- different identity on the server to avoid root users on clients from accessing or
- modifying root-owned files on the servers. This does not prevent root users on the client
- from assuming the identity of a non-root user, so should not be considered a robust
- security mechanism. Typically, for management purposes, at least one client system should
- not be subject to root squash.</para>
+ <para>A mechanism whereby the identity of a root user on a client
+ system is mapped to a different identity on the server to prevent root
+ users on clients from accessing or modifying root-owned files on the
+ servers. This does not prevent root users on the client from assuming
+ the identity of a non-root user, so should not be considered a robust
+ security mechanism. Typically, for management purposes, at least one
+ client system should not be subject to root squash.</para>
</glossdef>
</glossentry>
<glossentry xml:id="routing">
- <glossterm>Routing </glossterm>
+ <glossterm>Routing</glossterm>
<glossdef>
- <para>LNET routing between different networks and LNDs.</para>
+ <para>LNet routing between different networks and LNDs.</para>
</glossdef>
</glossentry>
<glossentry xml:id="rpc">
- <glossterm>RPC
- </glossterm>
+ <glossterm>RPC</glossterm>
<glossdef>
<para>Remote procedure call. A network encoding of a request.</para>
</glossdef>
    </glossentry>
</glossdiv>
<glossdiv>
<title>S</title>
- <glossentry xml:id="stride">
- <glossterm>Stripe </glossterm>
+ <glossentry xml:id="stripe">
+ <glossterm>Stripe</glossterm>
<glossdef>
- <para>A contiguous, logical extent of a Lustre file written to a single OST. Used
- synonymously with a single OST data object that makes up part of a file visible to user
- applications.</para>
+ <para>A contiguous, logical extent of a Lustre file written to a single
+ OST. Used synonymously with a single OST data object that makes up part
+ of a file visible to user applications.</para>
</glossdef>
</glossentry>
- <glossentry xml:id="stridesize">
- <glossterm>Stripe size </glossterm>
+ <glossentry xml:id="stripeddirectory" condition="l28">
+ <glossterm>Striped Directory</glossterm>
<glossdef>
- <para>The maximum number of bytes that will be written to an OST object before the next
- object in a file's layout is used when writing sequential data to a file. Once a full
- stripe has been written to each of the objects in the layout, the first object will be
- written to again in round-robin fashion.</para>
+ <para>A striped directory is a feature of Lustre
+ software where metadata for files in a given directory is
+ distributed evenly over multiple MDTs. Striped directories
+ are only available in Lustre software version 2.8 or later.
+ An administrator can use a striped directory to increase
+ metadata performance by distributing the metadata requests
+ in a single directory over two or more MDTs.</para>
</glossdef>
</glossentry>
- <glossentry xml:id="stripcount">
- <glossterm>Stripe count
- </glossterm>
+ <glossentry xml:id="stridesize">
+ <glossterm>Stripe size</glossterm>
<glossdef>
- <para>The number of OSTs holding objects for a RAID0-striped Lustre file.</para>
+ <para>The maximum number of bytes that will be written to an OST object
+ before the next object in a file's layout is used when writing
+ sequential data to a file. Once a full stripe has been written to each
+ of the objects in the layout, the first object will be written to again
+ in round-robin fashion.</para>
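+ <para>For example, the following command creates a file with a 4 MB
+ stripe size over two objects, so the first 4 MB of sequential data is
+ written to the first object, the next 4 MB to the second, and the next
+ 4 MB to the first again (the file name is illustrative):</para>
+ <screen>client# lfs setstripe -S 4M -c 2 /mnt/testfs/datafile</screen>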
</glossdef>
</glossentry>
- <glossentry xml:id="stripingmetadata">
- <glossterm>Striping metadata
- </glossterm>
+ <glossentry xml:id="stripcount">
+ <glossterm>Stripe count</glossterm>
<glossdef>
- <para>The extended attribute associated with a file that describes how its data is
- distributed over storage objects. See also <emphasis role="italic">Default stripe
- pattern</emphasis>.</para>
+ <para>The number of OSTs holding objects for a RAID0-striped Lustre
+ file.</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv>
<title>T</title>
<glossentry xml:id="t10">
- <glossterm>T10 object protocol
- </glossterm>
+ <glossterm>T10 object protocol</glossterm>
<glossdef>
- <para>An object storage protocol tied to the SCSI transport layer. The Lustre file system
- does not use T10.</para>
+ <para>An object storage protocol tied to the SCSI transport layer. The
+ Lustre file system does not use T10.</para>
</glossdef>
</glossentry>
</glossdiv>
<glossdiv>
<title>W</title>
<glossentry xml:id="widestriping">
- <glossterm>Wide striping
- </glossterm>
- <glossdef>
- <para>Strategy of using many OSTs to store stripes of a single file. This obtains maximum
- bandwidth access to a single file through parallel utilization of many OSTs. For more
- information about wide striping, see <xref xmlns:xlink="http://www.w3.org/1999/xlink"
- linkend="section_syy_gcl_qk"/>.</para>
+ <glossterm>Wide striping</glossterm>
+ <glossdef>
+ <para>Strategy of using many OSTs to store stripes of a single file.
+ This obtains maximum bandwidth access to a single file through parallel
+ utilization of many OSTs. For more information about wide striping, see
+ <xref xmlns:xlink="http://www.w3.org/1999/xlink"
+ linkend="wide_striping" />.</para>
</glossdef>
</glossentry>
</glossdiv>