Introduction
------------

The Lustre parallel file system provides a global POSIX namespace for
the computing resources of a data center. Lustre runs on Linux-based
hosts via kernel modules and delegates block storage management to its
servers while presenting object-based storage to its clients. Servers
are responsible for both data objects (the contents of actual files)
and index objects (for directory information). Data objects are
gathered on Object Storage Servers (OSSs), and index objects are stored
on Metadata Servers (MDSs). Each storage volume is a target: Object
Storage Targets (OSTs) on OSSs, and Metadata Targets (MDTs) on MDSs.
Clients assemble the data from the MDTs and OSTs to present a single,
coherent, POSIX-compliant file system.

The clients and servers communicate and coordinate among themselves via
network protocols. A low-level protocol, LNet, abstracts the details of
the underlying networking hardware and presents a uniform interface,
originally based on Sandia Portals <>, to Lustre clients and servers.
Lustre, in turn, layers its own protocol atop LNet (see the sketch at
the end of this document).

This document describes the Lustre protocol. The remainder of the
introduction presents several concepts that illuminate the operation of
the Lustre protocol. In <> a subsection is devoted to each of several
semantic operations (setattr, statfs, ...). That discussion introduces
the RPCs of the Lustre protocol and gives an idea of how RPCs are used
to implement the file system. In <> each RPC of the Lustre protocol is
presented in turn.

include::client.txt[]

include::target.txt[]

include::rpc.txt[]

include::connection.txt[]

include::transno.txt[]

include::path_lookup.txt[]

include::lov_index.txt[]

include::grant.txt[]

include::ldlm.txt[]

include::llog.txt[]

include::security.txt[]
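To make the layering described in the introduction concrete, the sketch
below models a Lustre RPC riding as the opaque payload of an LNet
message. It is a minimal sketch under loose assumptions: the structure
and field names (`toy_lnet_header`, `toy_lustre_rpc`, `dst_nid`, and so
on) are hypothetical illustrations, not definitions from the Lustre or
LNet sources; the actual message formats are specified in the chapters
above.

[source,c]
----
/*
 * toy_layering.c: an illustrative sketch only.  Every name and field
 * below is hypothetical -- none is taken from the Lustre or LNet
 * sources.  The point is the nesting: a Lustre RPC travels as the
 * opaque payload of an LNet message, and LNet hides the choice of
 * network hardware from everything above it.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical LNet-level header: addressing and payload length only.
 * Whether the bytes cross TCP, InfiniBand, or some other fabric is
 * invisible at and above this layer. */
struct toy_lnet_header {
        uint64_t dst_nid;     /* network identifier of the destination peer */
        uint32_t payload_len; /* size of the Lustre message that follows */
};

/* Hypothetical Lustre-level message carried as the LNet payload. */
struct toy_lustre_rpc {
        uint32_t opcode;      /* which file system operation is requested */
        uint64_t xid;         /* identifier matching a reply to its request */
};

int main(void)
{
        struct toy_lustre_rpc rpc = { .opcode = 101, .xid = 1 };
        struct toy_lnet_header hdr = {
                .dst_nid     = 0x0a0b0c0d,
                .payload_len = (uint32_t)sizeof rpc,
        };
        unsigned char wire[sizeof hdr + sizeof rpc];

        /* "Transmit": to LNet, the Lustre RPC is just payload bytes. */
        memcpy(wire, &hdr, sizeof hdr);
        memcpy(wire + sizeof hdr, &rpc, sizeof rpc);

        printf("LNet header (%zu bytes) carrying a %u-byte Lustre RPC, opcode %u\n",
               sizeof hdr, (unsigned)hdr.payload_len, (unsigned)rpc.opcode);
        return 0;
}
----

The design point the sketch is meant to capture is that the Lustre
message is opaque to LNet: LNet needs only addressing and a length, so
the same Lustre protocol can run over any network that LNet supports.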