From 3deba5d548263c55dc9e69e13acc4140aac21eae Mon Sep 17 00:00:00 2001 From: Linda Bebernes Date: Wed, 28 Aug 2013 19:01:08 -0700 Subject: [PATCH] LUDOC-130 install-progrIF: Updated install procedures. Also LUDOC-116,158. Updates to Chapter 8 (Installing from RPMs), new content in Chapter 29 (Installing from source), updates to Chapter 33 (Programming Interfaces), new content in Chapter 16 (Upgrading). All of these chapters are interconnected by cross-references to target text that required editing. Since Ch 33 was being edited, it was updated with changes unrelated to installation as well. Signed-off-by: Linda Bebernes Change-Id: I4e2f696d043979cb4a490bd3af376bf894e784b1 Reviewed-on: http://review.whamcloud.com/6608 Tested-by: Hudson Reviewed-by: Richard Henwood --- InstallingLustre.xml | 703 +++++++++++++++++------------------ InstallingLustreFromSourceCode.xml | 742 ++++++++++++++++++++++++------------- LustreProgrammingInterfaces.xml | 162 +++++--- UpgradingLustre.xml | 478 ++++++++++++++++++------ 4 files changed, 1297 insertions(+), 788 deletions(-) diff --git a/InstallingLustre.xml b/InstallingLustre.xml index b740150..f44ca2d 100644 --- a/InstallingLustre.xml +++ b/InstallingLustre.xml @@ -1,456 +1,445 @@ - - Installing the Lustre Software - This chapter describes how to install the Lustre software. It includes: + + Installing the Lustre* Software + This chapter describes how to install the Lustre* software from RPM packages. It + includes: - - + + - - + + - For hardware/system requirements, see . + For hardware and system requirements and hardware configuration information, see .
- - <indexterm><primary>installing</primary><secondary>preparation</secondary></indexterm> - Preparing to Install the Lustre Software - If you are using a supported Linux distribution and architecture, you can install Lustre from downloaded packages (RPMs). For a list of available packages, see the Lustre Downloads page. - A list of available Lustre LNET drivers can be found in . - If you are not using a configuration with pre-built packages, you can install Lustre directly from the source code. For more information on this installation method, see . -
- <indexterm><primary>installing</primary><secondary>requirements</secondary></indexterm> -Required Software - To install Lustre, the following are required: - - - (On Linux servers only) Linux kernel patched with Lustre-specific patches for your platform and architecture. A Linux patched kernel can be installed on a client if it is desireable to use the same kernel on all nodes, but this is not required. - - - Lustre modules compiled for the Linux kernel (see for which packages are required for servers and clients in your configuration). - - - Lustre utilities required for configuring Lustre (see for which packages are required for servers and clients in your configuration). - - - (On Linux servers only) e2fsprogs package containing Lustre-specific tools (e2fsck and lfsck) used to repair a backing file system. This replaces the existing e2fsprogs package but provides complete e2fsprogs functionality - - - (Optional) Network-specific kernel modules and libraries such as the Lustre-specific OFED package required for an InfiniBand network interconnect. - - - At least one Lustre RPM must be installed on each server and on each client in a Lustre file system. lists required Lustre packages and indicates where they are to be installed. Some Lustre packages are installed on Lustre servers (MGS, MDS, and OSSs), some are installed on Lustre clients, and some are installed on all Lustre nodes. - . - - Lustre required packages, descriptions and installation guidance - - - - - - + + <indexterm> + <primary>installing</primary> + <secondary>preparation</secondary> + </indexterm> Preparing to Install the Lustre Software + You can install the Lustre software from downloaded packages (RPMs) or directly from the + source code. This chapter describes how to install the Lustre RPM packages. For information + about installing from source code, see . + The Lustre RPM packages have been tested on the Linux distributions listed in the table + below. + +
+ Lustre Test Matrix + + + + - - Lustre Package - - - Description - - - Install on servers - - Installing a patched kernel on a client node is not required. However, if a client node will be used as both a client and a server, or if you want to install the same kernel on all nodes for any reason, install the server packages designated with an asterisk (*) on the client node. - - - - Install on clients - + Lustre Version + Servers Tested1 + Clients Tested - - Lustre patched kernel RPMs: - - -   - - -   - - - - -   - - - kernel-ver_lustre.arch - - - Lustre patched server kernel. - - - X* - - -   - - - - - Lustre module RPMs: - - -   - - -   - - -   - - - - -   - - - lustre-modules-ver - - For Lustre-patched kernel. + 2.0 - X* + RHEL 5, CentOS 5 -   + RHEL 5, CentOS 5, SLES 11 SP0 -   - - - lustre-client-modules-ver - - - For clients. - - -   - - - X - - - - - Lustre utilities: - - -   - - -   - - -   - - - - -   - - - lustre-ver + 2.1.x - Lustre utilities package. This includes userspace utilities to configure and run Lustre. + RHEL 5, CentOS 5, RHEL 6, CentOS 6 -   -   - X* - - -   -   + RHEL 5, CentOS 5, RHEL 6, CentOS 6, SLES 11 SP1 -   - - - lustre-client-ver + 2.2 - Lustre utilities for clients. + RHEL 6, CentOS 6 -   - - - X + RHEL 5, CentOS 5, RHEL 6, CentOS 6, SLES 11 SP1 -   - - - lustre-ldiskfs-ver + 2.3 - Lustre-patched backing file system kernel module package for the ldiskfs file system. + RHEL 6.3, CentOS 6.3 -   - X - - -   + RHEL 6.3, CentOS 6.3, RHEL 5.8, CentOS 5.8, SLES 11 SP1 + 2.4.x -   - - - e2fsprogs-ver - - - Utilities package used to maintain the ldiskfs backing file system. - - -   - X + RHEL 6.4, CentOS 6.4 -   + RHEL 6.4, CentOS 6.4, SLES 11 SP2, FC18
- In all supported Lustre installations, a patched kernel must be run on each server, including the the MGS, the MDS, and all OSSs. Running a patched kernel on a Lustre client is only required if the client will be used for multiple purposes, such as running as both a client and an OST or if you want to use the same kernel on all nodes. - Lustre RPM packages are available on the Intel download site. They must be installed in the order described in the . -
- <indexterm><primary>installing</primary><secondary>network</secondary></indexterm> -Network-specific Kernel Modules and Libraries - Network-specific kernel modules and libraries may be needed such as the Lustre-specific OFED package required for an InfiniBand network interconnect. -
-
- <indexterm><primary>installing</primary><secondary>tools and utilities</secondary></indexterm> -Lustre-Specific Tools and Utilities - Several third-party utilities must be installed on servers: - + + 1Red Hat* Enterprise Edition*, CentOS Enterprise Linux + Distribution, SUSE Linux Enterprise Server, Fedora* F18 Linux kernel. +
+ Software Requirements + To install the Lustre software from RPMs, the following are required: + + Lustre server + packages. The required packages for Lustre servers are listed + in the table below, where ver refers to the Linux* kernel + distribution (e.g., 2.6.32-358.6.2.el6) and arch refers to + the processor architecture (e.g., x86_64). These packages are available in the Lustre + Releases repository. + + + Packages Installed on Lustre Servers + + + + + + Package Name* + Description + + + + + kernel-ver_lustre.arch + Linux kernel with Lustre patches (often referred to as "patched + kernel") + + + lustre-ver_lustre.arch + Lustre command line tools + + + lustre-modules-ver_lustre.arch + Lustre-patched kernel modules + + + lustre-ldiskfs-ver_lustre.arch + Lustre back-end file system tools + + + e2fsprogs + Utility to maintain Lustre back-end file system + + + lustre-tests-ver_lustre.arch + Lustre I/O Kit benchmarking tools (Included in + Lustre software as of Version 2.2) + + + +
+
+
- e2fsprogs : Lustre requires a recent version of e2fsprogs that understands extents and large xattr. Use e2fsprogs-1.41.90.wc4 or later, available at: - http://downloads.whamcloud.com/ - A quilt patchset of all changes to the vanilla e2fsprogs is available in e2fsprogs-{version}-patches.tgz. + Lustre client + packages. The required packages for Lustre clients are listed + in the table below, where ver refers to the Linux + distribution (e.g., 2.6.18-348.1.1.el5). These packages are available in the Lustre + Releases repository. + + + Packages Installed in Lustre Clients + + + + + + Package Name + Description + + + + + lustre-client-modules-ver + Patchless kernel modules for client + + + lustre-client-ver_lustre + Client command line tools + + + lustre-client-tests-ver + Lustre I/O Kit (Included in Lustre software as + of Version 2.2) + + + + +
+
- The Lustre-patched e2fsprogs utility is only required on machines that mount backend (ldiskfs) file systems, such as the OSS, MDS and MGS nodes. It does not need to be loaded on clients. + The version of the kernel running on a Lustre client must be the same as the + version of the lustre-client-modules-ver + package being installed. If the kernel running on the client is not compatible, a + kernel that is compatible must be installed on the client before the Lustre file + system software is installed.
- Perl - Various userspace utilities are written in Perl. Any recent version of Perl will work with Lustre. + Lustre LNET network driver + (LND). The Lustre LNDs provided with the Lustre software are + listed in the table below. For more information about Lustre LNET, see . + + Network Types Supported by Lustre LNDs + + + + + + Supported Network Types + Notes + + + + + TCP + Any network carrying TCP traffic, including GigE, 10GigE, and + IPoIB + + + InfiniBand* network + OpenFabrics OFED (o2ib) + + + gni + Gemini (Cray) + + + Seastar + Cray + + + MX + Myrinet* network + + + ra + RapidArray* interconnect + + + Elan + Quadrics + + + +
+
+
+ + The InfiniBand* and TCP Lustre LNDs are routinely tested during release cycles. The + other LNDs are maintained by their respective owners. + + + + High availability + software. If needed, install third party high-availability + software. For more information, see . + + + Optional + packages. Optional packages provided in the Lustre + Releases repository may include the following (depending on the operating + system and platform): + + kernel-debuginfo, kernel-debuginfo-common, + lustre-debuginfo, lustre-ldiskfs-debuginfo - + Versions of required packages with debugging symbols and other debugging options + enabled for use in troubleshooting. + + + kernel-devel, - Portions of the kernel tree needed to compile + third party modules, such as network drivers. + + + kernel-firmware - Standard Red Hat Enterprise Linux package + that has been recompiled to work with the Lustre kernel. + + + kernel-headers - Header files installed under /usr/include and + used when compiling user-space, kernel-related code. + + + lustre-source - Source code for Lustre. + + + (Recommended) perf, + perf-debuginfo, python-perf, + python-perf-debuginfo - Linux performance analysis tools that + have been compiled to match the Lustre kernel version. + + -
-
- <indexterm><primary>installing</primary><secondary>high-availability</secondary></indexterm> -(Optional) High-Availability Software - If you plan to enable failover server functionality with Lustre (either on an OSS or the MDS), you must add high-availability (HA) software to your cluster software. For more information, see . -
-
- (Optional) Debugging Tools and Other Optional Packages - A variety of optional packages are provided on the - Intel download site. - These include debugging tools, test programs and scripts, Linux kernel and Lustre source code, and other packages. - For more information about debugging tools, see the topic - Debugging Lustre - on the Lustre wiki. -
+
-
- <indexterm><primary>installing</primary><secondary>environment</secondary></indexterm> -Environmental Requirements - Make sure the following environmental requirements are met before installing Lustre: - - - - - (Required) - - Disable Security-Enhanced Linux (SELinux) on servers and clients . Lustre does not support SELinux. Therefore, disable the SELinux system extension on all Lustre nodes and make sure other security extensions, like Novell AppArmorand network packet filtering tools (such as iptables) do not interfere with Lustre. See in below. - - - - - (Required) - - Maintain uniform user and group databases on all cluster nodes . Use the same user IDs (UID) and group IDs (GID) on all clients. If use of supplemental groups is required, verify that the identity_upcall requirements have been met. See . - - - - - (Recommended) - - Provide remote shell access to clients . Although not strictly required to run Lustre, we recommend that all cluster nodes have remote shell client access to facilitate the use of Lustre configuration and monitoring scripts. Parallel Distributed SHell (pdsh) is preferable, although Secure SHell (SSH) is acceptable. - - - - - (Recommended) - - Ensure client clocks are synchronized . Lustre uses client clocks for timestamps. If clocks are out of sync between clients, files will appear with different time stamps when accessed by different clients. Drifting clocks can also cause problems, for example, by making it difficult to debug multi-node issues or correlate logs, which depend on timestamps. We recommend that you use Network Time Protocol (NTP) to keep client and server clocks in sync with each other. For more information about NTP, see: http://www.ntp.org. - - +
+ Environmental Requirements + Before installing the Lustre software, make sure the following environmental + requirements are met. + + (Required) + Disable Security-Enhanced Linux + (SELinux) on all Lustre servers and clients. The Lustre + software does not support SELinux. Therefore, the SELinux system extension must be + disabled on all Lustre nodes. Also, make sure other security extensions (such as the + Novell AppArmor* security system) and network packet filtering tools (such as + iptables) do not interfere with the Lustre software. + + + (Required) Use the same user IDs (UID) and group IDs (GID) on all + clients. If use of supplemental groups is required, see + for information about supplementary user + and group cache upcall (identity_upcall). + + + (Recommended) Provide remote shell access to clients. It is + recommended that all cluster nodes have remote shell client access to facilitate the + use of Lustre configuration and monitoring scripts. Parallel Distributed SHell (pdsh) + is preferable, although Secure SHell (SSH) is acceptable. + + + (Recommended) Ensure client clocks are synchronized. The + Lustre file system uses client clocks for timestamps. If clocks are out of sync + between clients, files will appear with different time stamps when accessed by + different clients. Drifting clocks can also cause problems by, for example, making it + difficult to debug multi-node issues or correlate logs, which depend on timestamps. We + recommend that you use Network Time Protocol (NTP) to keep client and server clocks in + sync with each other. For more information about NTP, see: http://www.ntp.org. + +
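+ For example, on a RHEL-based node the SELinux and clock-synchronization requirements above
+ can typically be satisfied with commands similar to the following. This is only an
+ illustrative sketch; the exact commands, service names, and file locations vary by
+ distribution:
+ # sestatus                        # report the current SELinux mode
+ # sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
+ # setenforce 0                    # stop enforcing until the next reboot
+ # chkconfig ntpd on && service ntpd start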
Lustre Installation Procedure - Before installing Lustre, back up ALL data. Lustre contains kernel modifications which interact with storage devices and may introduce security issues and data loss if not installed, configured or administered properly. + Before installing the Lustre software, back up ALL data. The Lustre software contains + kernel modifications that interact with storage devices and may introduce security issues + and data loss if not installed, configured, or administered properly. - Use this procedure to install Lustre from RPMs. + To install the Lustre software from RPMs, complete the steps below. - Verify that all Lustre installation requirements have been met. - For more information on these prerequisites, see: + Verify that all Lustre installation requirements have been met. - Hardware requirements in + For hardware requirements, see . - Software and environmental requirements in + For software and environmental requirements, see the section above. - Download the Lustre RPMs. - - - On the Lustre download site, select your platform. - The files required to install Lustre (kernels, modules and utilities RPMs) are listed for the selected platform. - - - Download the required files. - - + Download the Lustre server RPMs for your platform from the Lustre Releases + repository. See for a list of required packages. - Install the Lustre packages on the servers. - - - Refer to to determine which Lustre packages are to be installed on servers for your platform and architecture. - Some Lustre packages are installed on the Lustre servers (MGS, MDS, and OSSs). Others are installed on Lustre clients. - Lustre packages must be installed in the order specified in the following steps. - - - Install the kernel, modules and ldiskfs packages. - Use the rpm -ivh command to install the kernel, module and ldiskfs packages. - - It is not recommended that you use the rpm -Uvh command to install a kernel, because this may leave you with an unbootable system if the new kernel doesn't work for some reason. - - For example, the command in the following example would install required packages on a server with Infiniband networking - $ rpm -ivh kernel-ver_lustre-ver kernel-ib-ver \ -lustre-modules-ver lustre-ldiskfs-ver - - - - Update the bootloader (grub.conf or lilo.conf) configuration file as needed. - - - Verify that the bootloader configuration file has been updated with an entry for the patched kernel. - Before you can boot to a new distribution or kernel, there must be an entry for it in the bootloader configuration file. Often this is added automatically when the kernel RPM is installed. - - - Disable security-enhanced (SE) Linux on servers and clients by including an entry in the bootloader configuration file as shown below: - selinux=0 - - - - - - Install the utilities/userspace packages. - Use the rpm -ivh command to install the utilities packages. For example: - $ rpm -ivh lustre-ver - - - - Install the e2fsprogs package. - Use the rpm -ivh command to install the e2fsprogs package. For example: - $ rpm -ivh e2fsprogs-ver - - - If e2fsprogs is already installed on your Linux system, install the Lustre-specific e2fsprogs version by using rpm -Uvh to upgrade the existing e2fsprogs package. For example: - $ rpm -Uvh e2fsprogs-ver - - The rpm command options --force or --nodeps should not be used to install or update the Lustre-specific e2fsprogs package. If errors are reported, file a bug (for instructions see the topic - Reporting Bugs - on the Lustre wiki). 
- - - - - (Optional) - To add optional packages to your Lustre file system, install them now. - Optional packages include file system creation and repair tools, debugging tools, test programs and scripts, Linux kernel and Lustre source code, and other packages. A complete list of optional packages for your platform is provided on the Lustre Downloads page. - - + Install the Lustre server packages on all Lustre servers (MGS, MDSs, and + OSSs). + + Log onto a Lustre server as the root user + + + Use the yum command to install the packages: + + # yum --nogpgcheck install pkg1.rpm pkg2.rpm ... + + + + Verify the packages are installed correctly: + + rpm -qa|egrep "lustre|wc"|sort + + + + Reboot the server. + + + Repeat these steps on each Lustre server. + + - Install the Lustre packages on the clients. - - - Refer to to determine which Lustre packages are to be installed on clients for your platform and architecture. - + Download the Lustre client RPMs for your platform from the Lustre Releases + repository. See for a list of required packages. + + + Install the Lustre client packages on all Lustre clients. + The version of the kernel running on a Lustre client must be the same as the + version of the lustre-client-modules-ver + package being installed. If not, a compatible kernel must be installed on the client + before the Lustre client packages are installed. + + - Install the module packages for clients. - Use the rpm -ivh command to install the lustre-client and lustre-client-modules-ver packages. For example: - $ rpm -ivh lustre-client-modules-ver kernel-ib-ver + Log onto a Lustre client as the root user. - Install the utilities/userspace packages for clients. - Use the rpm -ivh command to install the utilities packages. For example: - $ rpm -ivh lustre-client + Use the yum command to install the packages: + + # yum --nogpgcheck install pkg1.rpm pkg2.rpm ... + - Update the bootloader (grub.conf or lilo.conf) configuration file as needed. - - - Verify that the bootloader configuration file has been updated with an entry for the patched kernel. - Before you can boot to a new distribution or kernel, there must be an entry for it in the bootloader configuration file. Often this is added automatically when the kernel RPM is installed. - - - Disable security-enhanced (SE) Linux on servers and clients by including an entry in the bootloader configuration file as shown below: - selinux=0 - - + Verify the packages were installed correctly: + + # rpm -qa|egrep "lustre|kernel"|sort + - - - - Reboot the patched clients and the servers. - - If you applied the patched kernel to any clients, reboot them. - Unpatched clients do not need to be rebooted. + Reboot the client. - Reboot the servers. + Repeat these steps on each Lustre client. - To configure LNET, go next to . If default settings will be used for LNET, go to . + To configure LNET, go to . If default settings will be + used for LNET, go to .
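+ As an optional, illustrative sanity check before configuring LNET, you can confirm on each
+ node that the installed Lustre modules load cleanly against the running kernel; this is not
+ part of the required procedure:
+ # modprobe -v lustre              # load the Lustre modules installed above
+ # lctl get_param version          # report the running Lustre version
+ # lustre_rmmod                    # unload the modules again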
diff --git a/InstallingLustreFromSourceCode.xml b/InstallingLustreFromSourceCode.xml index be1a2b5..102795c 100644 --- a/InstallingLustreFromSourceCode.xml +++ b/InstallingLustreFromSourceCode.xml @@ -1,7 +1,8 @@ - - - Installing Lustre from Source Code - If you need to build a customized Lustre server kernel or are using a Linux kernel that has not been tested with the version of Lustre you are installing, you may need to build and install Lustre from source code. This chapter describes: + + Installing a Lustre* File System from Source + Code + This chapter describes how to create a customized Lustre* server kernel from source code. + Sections included are: @@ -13,349 +14,566 @@ - + + + It is recommended that you install from prebuilt RPMs of the Lustre software unless you + need to customize the Lustre server kernel or will be using a Linux kernel that has not been + tested with the Lustre software. + For a list of supported Linux distributions and architectures, see . Prebuilt RPMs + are available in the Lustre Releases + repository. For information about installing Lustre RPMs, see . +
<indexterm><primary>installing</primary><secondary>from source</secondary></indexterm> Overview and Prerequisites - Lustre can be installed from either pre-built binary packages (RPMs) or freely-available source code. Installing from the package release is recommended unless you need to customize the Lustre server kernel or will be using an Linux kernel that has not been tested with Lustre. For a list of supported Linux distributions and architectures, visit the Lustre Support Matrix on the Lustre wiki. The procedure for installing Lustre from RPMs is describe in . To install Lustre from source code, the following are required: - Linux kernel patched with Lustre-specific patches + An x86_64 machine with a fresh installation of a CentOS (or Red Hat*) 6 Linux + operating system. - Lustre modules compiled for the Linux kernel + Access to the Lustre + Releases repository. This repository contains Lustre software patches and a test + suite. - Lustre utilities required for Lustre configuration + Access to a recent version of EPEL containing the + quilt utility used for managing a series of patches. + + + (Recommended) At least 1 GB memory on the machine + used for the build. + + + (Recommended) At least 20 GB hard disk space on the + machine used for the build. + + + Security-Enhanced Linux (SELinux) disabled on all + Lustre servers and clients. The Lustre software does not support SELinux. - The installation procedure involves several steps: + The installation procedure includes several steps: - Patching the core kernel + Patching the core kernel. - Configuring the kernel to work with Lustre + Building Lustre RPMs. - Creating Lustre and kernel RPMs from source code. + Installing and testing the Lustre file system. - - When using third-party network hardware with Lustre, the third-party modules (typically, the drivers) must be linked against the Linux kernel. The LNET modules in Lustre also need these references. To meet these requirements, a specific process must be followed to install and recompile Lustre. See , for an example showing how to install Lustre 1.6.6 using the Myricom MX 1.2.7 driver. The same process can be used for other third-party network stacks. - + These steps are described in the following sections.
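+ Before starting, it can be helpful to confirm that the build machine meets the prerequisites
+ listed above. The following commands are an illustrative check only:
+ # uname -m                        # expect x86_64
+ # getenforce                      # expect Disabled
+ # free -g                         # at least 1 GB of memory is recommended
+ # df -h /home                     # at least 20 GB of free disk space is recommended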
<indexterm><primary>installing</primary><secondary>from source</secondary><tertiary>patching the kernel</tertiary></indexterm>Patching the Kernel - If you are using non-standard hardware, plan to apply a Lustre patch, or have another reason not to use packaged Lustre binaries, you have to apply several Lustre patches to the core kernel and run the Lustre configure script against the kernel. + This section first describes how to prepare your machine to serve as a development + environment, including build tools, the Lustre source, and the Linux kernel source. It then + describes how to apply Lustre patches to the Linux kernel. + + A patched Linux kernel is not required on Lustre clients. +
- <indexterm><primary>quilt</primary></indexterm>Introducing the <literal>quilt</literal> Utility - To simplify the process of applying Lustre patches to the kernel, we recommend that you use the quilt utility. - quilt manages a stack of patches on a single source tree. A series file lists the patch files and the order in which they are applied. Patches are applied, incrementally, on the base tree and all preceding patches. You can: - + <indexterm> + <primary>quilt</primary> + </indexterm>Provisioning the Build Machine and Installing Dependencies + + This example procedure assumes a Red Hat* Enteprise Linux* 6 operating system has been + freshly installed on a machine with the hostname rhel6-master. + + To provision the build machine and install dependencies, complete these steps. + + + Log in as root. + - Apply patches from the stack (quilt push) + Install the kernel development tools: + + # yum -y groupinstall "Development Tools" + + + If the Development Tools group is not available, you may need to install the + packages individually using: + + # yum -y install automake xmlto asciidoc + elfutils-libelf-devel zlib-devel binutils-devel + newt-devel python-devel hmaccalc perl-ExtUtils-Embed + rpm-build make gcc redhat-rpm-config patchutils git + + - Remove patches from the stack (quilt pop) + Install additional dependencies: + + # yum -y install xmlto asciidoc elfutils-libelf-devel + zlib-devel binutils-devel newt-devel python-devel hmaccalc + perl-ExtUtils-Embed + - Query the contents of the series file (quilt series), the contents of the stack (quilt applied, quilt previous, quilt top), and the patches that are not applied at a particular moment (quilt next, quilt unapplied). + Install EPEL: + + # wget + http://download.fedoraproject.org/pub/epel/5/x86_64/ + epel-release-5-4.noarch.rpm +# rpm -ivh ./epel-release-5-4.noarch.rpm + + - Edit and refresh (update) patches with quilt, as well as revert inadvertent changes, and fork or clone patches and show the diffs before and after work. + Install quilt: + + # yum -y install quilt + + + + + newt-devel may not be available for RHEL6. One option is to + download the newt-devel, slang-devel, and asciidoc RPMs from CentOS and install using: + + yum --nogpgcheck localinstall + ./newt-devel-0.52.11-3.el6.x86_64.rpm + ./slang-devel-2.2.1-1.el6.x86_64.rpm + ./asciidoc-8.4.5-4.1.el6.noarch.rpm + + + - - A variety of quilt packages (RPMs, SRPMs and tarballs) are available from various sources. Use the most recent version you can find. quilt depends on several other utilities, e.g., the coreutils RPM that is only available in RedHat 9. For other RedHat kernels, you have to get the required packages to successfully install quilt. If you cannot locate a quilt package or fulfill its dependencies, you can build quilt from a tarball, available at the quilt project website: - http://savannah.nongnu.org/projects/quilt - For additional information on using Quilt, including its commands, see Introduction to Quilt and the quilt(1) man page. +
- Get the Lustre Source and Unpatched Kernel - The Lustre Engineering Team has targeted several Linux kernels for use with Lustre servers (MDS/OSS) and provides a series of patches for each one. The Lustre patches are maintained in the kernel_patch directory bundled with the Lustre source code. - - Lustre contains kernel modifications which interact with storage devices and may introduce security issues and data loss if not installed, configured and administered correctly. Before installing Lustre, be cautious and back up ALL data. - - - Each patch series has been tailored to a specific kernel version, and may or may not apply cleanly to other versions of the kernel. - - To obtain the Lustre source and unpatched kernel: + Preparing the Lustre Source + To prepare the Lustre source, complete these steps. + + + Create a user called build with a home directory + /home/build: + + # useradd -m build + + + + + Switch to the user called build and change the directory to + the $HOME/build directory: + + # su build +# cd $HOME + + + + + Get the MASTER branch of the Lustre software from the Lustre repository: + + # git clone git://git.whamcloud.com/fs/lustre-release.git +# cd lustre-release + + + + + Run: + + sh ./autogen.sh + + + + + Resolve any outstanding dependencies. + # sh ./autogen.shWhen autogen.sh completes + successfully, a response similar to the following is displayed: + + Checking for a complete tree... +checking forautomake-1.9>= 1.9... found 1.9.6 +... +... +configure.ac:10: installing `./config.sub' +configure.ac:12: installing `./install-sh' +configure.ac:12: installing `./missing' +Running autoconf + + + +
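+ The steps above build from the MASTER branch. If you instead want to build a specific
+ maintenance release, check out the corresponding branch or tag before running autogen.sh.
+ The branch name below is illustrative only; list the available branches first and choose
+ one that matches the release you need:
+ # cd $HOME/lustre-release
+ # git branch -r                   # list the available release branches
+ # git checkout b2_4               # example: a 2.4 maintenance branch, if present
+ # sh ./autogen.sh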
+
+ Preparing the Kernel Source + To build the kernel using rpmbuild (a tool specific to RPM-based + distributions), complete these steps. - Verify that all of the Lustre installation requirements have been met. - For more information on these prerequisites, see: - - - Hardware requirements in . - - - Software and environmental requirements in - - + Get the kernel source: + + # cd $HOME +# mkdir -p kernel/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS} +# cd kernel +# echo '%_topdir %(echo $HOME)/kernel/rpmbuild'> ~/.rpmmacros + + - Download the Lustre source code. - On the Lustre download site, select a version of Lustre to download and then select 'Source' as the platform. + Install the kernel source (enter on one line): + + # rpm -ivh http://ftp.redhat.com/pub/redhat/linux/enterprise/ + 6Server/en/os/SRPMS/kernel-2.6.32-131.2.1.el6.src.rpm + 2>&1 | grep -v mockb + + + It is intended that the Lustre software MASTER branch be kept up-to-date with the + most recent kernel distributions. However, a delay may occur between a periodic update + to a kernel distribution and a corresponding update of the MASTER branch. The most + recent supported kernel in the MASTER branch can be found in the source directory in + lustre/kernel_patches/which_patch. If the MASTER branch is + not current with the latest distribution, download the most recent kernel RPMs from + the vendor's download site. + - Download the unpatched kernel. - Visit your Linux distributor for the Kernel source. + Prepare the source using rpmbuild: + + # cd ~/kernel/rpmbuild +# rpmbuild -bp --target=`uname -m` ./SPECS/kernel.spec + + + The text displayed ends with the following: + + ... +gpg: Total number processed: 1 +gpg: imported: 1 ++ gpg --homedir . --export --keyring ./kernel.pub Red +gpg: WARNING: unsafe permissions on homedir `.' ++ gcc -o scripts/bin2c scripts/bin2c.c ++ scripts/bin2c ksign_def_public_key __initdata ++ cd .. ++ exit 0 + + The kernel source with the Red Hat Enterprise Linux patches applied is now residing in + the directory + /home/build/kernel/rpmbuild/BUILD/kernel-2.6.32-131.2.1.el6/linux-2.6.32-131.2.1.el6.x86_64/
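+ Before applying the Lustre patches in the next section, you can confirm which kernel series
+ the checked-out Lustre code supports by reading the which_patch file mentioned above:
+ # cat ~/lustre-release/lustre/kernel_patches/which_patch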
- Patch the Kernel - This procedure describes how to use Quilt to apply the Lustre patches to the kernel. To illustrate the steps in this procedure, a RHEL 5 kernel is patched for Lustre 1.6.5.1. + Patching the Kernel Source with the Lustre Code + To patch the kernel source with the Lustre code, complete these steps. - Unpack the Lustre source and kernel to separate source trees. - - - Unpack the Lustre source. - For this procedure, we assume that the resulting source tree is in /tmp/lustre-1.6.5.1 - - - Unpack the kernel. - For this procedure, we assume that the resulting source tree (also known as the destination tree) is in /tmp/kernels/linux-2.6.18 - - + Go to the directory containing the kernel source: + + #cd ~/kernel/rpmbuild/BUILD/kernel-2.6.32-131.2.1.el6/ + linux-2.6.32-131.2.1.el6.x86_64/ + - Select a config file for your kernel, located in the kernel_configs directory (lustre/kernel_patches/kernel_config). - The kernel_config directory contains the .config files, which are named to indicate the kernel and architecture with which they are associated. For example, the configuration file for the 2.6.18 kernel shipped with RHEL 5 (suitable for i686 SMP systems) is kernel-2.6.18-2.6-rhel5-i686-smp.config. + Edit the Makefile file in this directory to add a unique build + id to be able to ascertain that the kernel is booted. Modify line 4 as shown + below: + + EXTRAVERSION = .lustremaster + - Select the series file for your kernel, located in the series directory (lustre/kernel_patches/series). - The series file contains the patches that need to be applied to the kernel. + Overwrite the .config file in this directory with the Lustre + .config file: + + # cp ~/lustre-release/lustre/kernel_patches/kernel_configs/ + kernel-2.6.32-2.6-rhel6-x86_64.config ./.config + - Set up the necessary symlinks between the kernel patches and the Lustre source. - This example assumes that the Lustre source files are unpacked under /tmp/lustre-1.6.5.1 and you have chosen the 2.6-rhel5.series file). Run: - $ cd /tmp/kernels/linux-2.6.18 -$ rm -f patches series -$ ln -s /tmp/lustre-1.6.5.1/lustre/kernel_patches/series/2.6-rhel5.series ./series -$ ln -s /tmp/lustre-1.6.5.1/lustre/kernel_patches/patches . + Link the Lustre series and patches: + + # ln -s ~/lustre-release/lustre/kernel_patches/series/ + 2.6-rhel6.series series +# ln -s ~/lustre-release/lustre/kernel_patches/patches patches + - Use quilt to apply the patches in the selected series file to the unpatched kernel. Run: - $ cd /tmp/kernels/linux-2.6.18 -$ quilt push -av - The patched destination tree acts as a base Linux source tree for Lustre. + Apply the patches to the kernel source using quilt: + + # quilt push -av + + The following is displayed: + ... +... +patching file fs/jbd2/transaction.c +Hunk #3succeeded at 1222(offset 3lines). +Hunk #4succeeded at 1357(offset 3lines). +Now at patch patches/jbd2-jcberr-2.6-rhel6.patch
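+ After quilt push -av completes, you can verify that the entire series was applied before
+ moving on to the build:
+ # quilt applied | wc -l           # number of patches applied
+ # quilt top                       # should report the last patch in the series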
- - <indexterm><primary>installing</primary><secondary>from source</secondary><tertiary>packaging</tertiary></indexterm> - - Creating and Installing the Lustre Packages - After patching the kernel, configure it to work with Lustre, create the Lustre packages (RPMs) and install them. - - - Configure the patched kernel to run with Lustre. Run: - $ cd /path/to/kernel/tree -$ cp /boot/config-`uname -r` .config -$ make oldconfig || make menuconfig -$ make include/asm -$ make include/linux/version.h -$ make SUBDIRS=scripts -$ make include/linux/utsrelease.h - - - Run the Lustre configure script against the patched kernel and create the Lustre packages. - $ cd /path/to/lustre/source/tree -$ ./configure --with-linux=/path/to/kernel/tree -$ make rpms - This creates a set of .rpms in /usr/src/redhat/RPMS/arch with an appended date-stamp. The SuSE path is /usr/src/packages. - - You do not need to run the Lustre configure script against an unpatched kernel. - - Example set of RPMs: - lustre-1.6.5.1-\2.6.18_53.xx.xx.el5_lustre.1.6.5.1.custom_20081021.i686.rpm - -lustre-debuginfo-1.6.5.1-\2.6.18_53.xx.xx.el5_lustre.1.6.5.1.custom_20081021.i686.rpm - -lustre-modules-1.6.5.1-\2.6.18_53.xx.xxel5_lustre.1.6.5.1.custom_20081021.i686.rpm - -lustre-source-1.6.5.1-\2.6.18_53.xx.xx.el5_lustre.1.6.5.1.custom_20081021.i686.rpm - - If the steps to create the RPMs fail, contact Lustre Support by reporting a bug. See . - - - Several features and packages are available that extend the core functionality of Lustre. These features/packages can be enabled at the build time by issuing appropriate arguments to the configure command. For a list of these features and packages, run ./configure -help in the Lustre source tree. The configs/ directory of the kernel source contains the config files matching each the kernel version. Copy one to .config at the root of the kernel tree. - - - - Create the kernel package. Navigate to the kernel source directory and run: - $ make rpm - Example result: - kernel-2.6.95.0.3.EL_lustre.1.6.5.1custom-1.i686.rpm - - Step is only valid for RedHat and SuSE kernels. If you are using a stock Linux kernel, you need to get a script to create the kernel RPM. - - - - Install the Lustre packages. - Some Lustre packages are installed on servers (MDS and OSSs), and others are installed on Lustre clients. For guidance on where to install specific packages, see that lists required packages and for each package and where to install it. Depending on the selected platform, not all of the packages listed in need to be installed. - - Running the patched server kernel on the clients is optional. It is not necessary unless the clients will be used for multiple purposes, for example, to run as a client and an OST. - - Lustre packages should be installed in this order: - - - Install the kernel, modules and ldiskfs packages. - Navigate to the directory where the RPMs are stored, and use the rpm -ivh command to install the kernel, module and ldiskfs packages. - $ rpm -ivh kernel-lustre-smp-ver \ -kernel-ib-ver \ -lustre-modules-ver \ -lustre-ldiskfs-ver - + + <indexterm> + <primary>installing</primary> + <secondary>from source</secondary> + <tertiary>packaging</tertiary> + </indexterm> Building the Lustre RPMs + This section describes how to configure the patched kernel to work with the Lustre + software and how to create and install the Lustre packages (RPMs). +
+ Building a New Kernel + To build a new kernel as an RPM, complete these steps. + + + In the kernel source directory, build a kernel rpm: + # cd ~/kernel/rpmbuild/BUILD/kernel-2.6.32-131.2.1.el6/ + linux-2.6.32-131.2.1.el6.x86_64/ +# make oldconfig || make menuconfig +# make include/asm +# make include/linux/version.h +# make SUBDIRS=scripts +# make include/linux/utsrelease.h +# make rpm + A successful build returns text similar to the following: + ... +... +Wrote: /home/build/kernel/rpmbuild/SRPMS/ + kernel-2.6.32lustremaster-1.src.rpm +Wrote: /home/build/kernel/rpmbuild/RPMS/x86_64/ + kernel-2.6.32.lustremaster-1.x86_64.rpm +Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.f73m1V ++ umask 022+ cd /home/build/kernel/rpmbuild/BUILD ++ cd kernel-2.6.32lustremaster ++ rm -rf /home/build/kernel/rpmbuild/BUILDROOT/ + kernel-2.6.32.lustremaster-1.x86_64 ++ exit 0 +rm ../kernel-2.6.32lustremaster.tar.gz + + If a request to generate more entropy appears, some disk or keyboard I/O needs to + be generated. You can generate entropy by entering the following on another + terminal:# grep -Ri 'any_text' /usr + + + + A fresh kernel RPM can now be found at + ~/kernel/rpmbuild/RPMS/x86_64/kernel-2.6.32.lustremaster-1.x86_64.rpm. +
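+ Before installing the kernel, you can optionally confirm that the build id added to
+ EXTRAVERSION earlier is present in the package metadata:
+ # rpm -qip ~/kernel/rpmbuild/RPMS/x86_64/kernel-2.6.32.lustremaster-1.x86_64.rpm | head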
+
+ Configuring and Building Lustre RPMs + To configure and build a set of Lustre RPMs, complete these steps. - Install the utilities/userspace packages. - Use the rpm -ivh command to install the utilities packages. For example: - $ rpm -ivh lustre-ver + Configure the Lustre source: + + # cd ~/lustre-release/ +# ./configure --with-linux=/home/build/kernel/rpmbuild/BUILD/ + kernel-2.6.32.lustremaster/ + + Text similar to the following is displayed: + + ... +... +LLCPPFLAGS: -D__arch_lib__ -D_LARGEFILE64_SOURCE=1 +CFLAGS: -g -O2 -Werror +EXTRA_KCFLAGS: -include /home/build/lustre-release/config.h -g + -I/home/build/lustre-release +/libcfs/include -I/home/build/lustre-release/lnet/include + -I/home/build/lustre-release/ +lustre/include +LLCFLAGS: -g -Wall -fPIC -D_GNU_SOURCE +Type 'make' to build Lustre. + - Install the e2fsprogs package. - Make sure the e2fsprogs package is unpacked, and use the rpm -i command to install it. For example: - $ rpm -i e2fsprogs-ver + Create the RPMs: + + # make rpms + + Text similar to the following is displayed: + + ... +... +Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.TsLWpD ++ umask 022 ++ cd /home/build/kernel/rpmbuild/BUILD ++ cd lustre-2.0.61 ++ rm -rf /home/build/kernel/rpmbuild/BUILDROOT/ + lustre-2.0.61-2.6.32_lustremaster_g0533e7b.x86_64 ++ exit 0 +make[1]: Leaving directory `/home/build/lustre-release' + + + The resulting RPMs are in + ~build/kernel/rpmbuild/RPMS/x86_64/.kernel-2.6.32lustremaster-1.x86_64.rpm +lustre-2.0.61-2.6.32.lustremaster_g0533e7b.x86_64.rpm +lustre-debuginfo-2.0.61-2.6.32.lustremaster_g0533e7b.x86_64.rpm +lustre-ldiskfs-3.3.0-2.6.32.lustremaster_g0533e7b.x86_64.rpm +lustre-ldiskfs-debuginfo-3.3.0-2.6.32.lustremaster_g0533e7b.x86_64.rpm +lustre-modules-2.0.61-2.6.32.lustremaster_g0533e7b.x86_64.rpm +lustre-source-2.0.61-2.6.32.lustremaster_g0533e7b.x86_64.rpm +lustre-tests-2.0.61-2.6.32.lustremaster_g0533e7b.x86_64.rpm +
+
+ Installing the Lustre Kernel + To install the Lustre kernel, complete these steps. - (Optional) If you want to add optional packages to your Lustre system, install them now. + As root, install the kernel: + + # rpm -ivh ~build/kernel/rpmbuild/RPMS/x86_64/ + kernel-2.6.32.lustremaster-1.x86_64.rpm + - - - - Verify that the boot loader (grub.conf or lilo.conf) has been updated to load the patched kernel. - - - Reboot the patched clients and the servers. - - If you applied the patched kernel to any clients, reboot them. - Unpatched clients do not need to be rebooted. + Create initrd using dracut: + + # /sbin/new-kernel-pkg --package kernel --mkinitrd --dracut + --depmod --install 2.6.32.lustremaster + - Reboot the servers. - Once all the machines have rebooted, the next steps are to configure Lustre Networking (LNET) and the Lustre file system. See . + (Optional) To mount the Lustre file system at boot, modify + /etc/fstab to add required entries. + In Lustre Release 2.4, the script + /etc/init.d/lustre has been provided to allow the Lustre file + system to be enabled as a service. To enable the service, enter the command + chkconfig lustre on. - - - -
-
- - <indexterm><primary>installing</primary><secondary>from source</secondary><tertiary>3rd-party network</tertiary></indexterm> - Installing Lustre with a Third-Party Network Stack - When using third-party network hardware, you must follow a specific process to install and recompile Lustre. This section provides an installation example, describing how to install Lustre 1.6.6 while using the Myricom MX 1.2.7 driver. The same process is used for other third-party network stacks, by replacing MX-specific references in with the stack-specific build and using the proper --with option when configuring the Lustre source code. - - - Compile and install the Lustre kernel. - - Install the necessary build tools. - GCC and related tools must be installed. For more information, see . - $ yum install rpm-build redhat-rpm-config -$ mkdir -p rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS} -$ echo '%_topdir %(echo $HOME)/rpmbuild' > .rpmmacros + (Optional) Customize the LNET configuration: + + # vi /etc/modprobe.d/lustre.conf + + In Lustre Release 2.4, the script + /etc/init.d/lnet has been provided to allow LNET to be enabled as + a service (for example, on a router). To enable this service, enter the command + chkconfig lnet on. - Install the patched Lustre source code. - This RPM is available at the Lustre download site. - $ rpm -ivh \ -kernel-lustre-source-2.6.18-92.1.10.el5_lustre.1.6.6.x86_64.rpm + Reboot the system: + + reboot + + A login prompt such as that shown below indicates + success:Red Hat Enterprise Linux Server release 6.0(Santiago) +Kernel 2.6.32lustremaster on an x86_64 + +client-10login: + + +
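+ After logging in, the build id added to EXTRAVERSION earlier makes it easy to confirm that
+ the patched kernel is actually running:
+ # uname -r                        # expect a version string containing "lustremaster"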
+
+
+ Installing and Testing a Lustre File System + This section describes how to install the Lustre RPMs and run the Lustre test + suite. +
+ Installing <package>e2fsprogs</package> + The e2fsprogs package is required to run the test suite. To download + and install e2fsprogs, complete these steps. - Build the Linux kernel RPM. - $ cd /usr/src/linux-2.6.18-92.1.10.el5_lustre.1.6.6 -$ make distclean -$ make oldconfig dep bzImage modules -$ cp /boot/config-`uname -r` .config -$ make oldconfig || make menuconfig -$ make include/asm -$ make include/linux/version.h -$ make SUBDIRS=scripts -$ make rpm + Download e2fsprogs from the Lustre + Releases repository. - Install the Linux kernel RPM. - If you are building a set of RPMs for a cluster installation, this step is not necessary. Source RPMs are only needed on the build machine. - $ rpm -ivh \ -~/rpmbuild/kernel-lustre-2.6.18-92.1.10.el5_\lustre.1.6.6.x86_64.rpm -$ mkinitrd /boot/2.6.18-92.1.10.el5_lustre.1.6.6 + Install the e2fsprogs package: + + # rpm -Uvh + ./e2fsprogs-1.42.6.wc2-7.el6.x86_64.rpm + ./e2fsprogs-libs-1.42.6.wc2-7.el6.x86_64.rpm + + + + +
+
+ Installing the Lustre RPMs + To install the Lustre RPMs, complete these steps as root: - Update the boot loader (/etc/grub.conf) with the new kernel boot information. - $ /sbin/shutdown 0 -r + Make the directory containing the Lustre RPMs your current directory: + + # cd /home/build/kernel/rpmbuild/RPMS/x86_64/ + + - - - - Compile and install the MX stack. - $ cd /usr/src/ -$ gunzip mx_1.2.7.tar.gz (can be obtained from www.myri.com/scs/) -$ tar -xvf mx_1.2.7.tar -$ cd mx-1.2.7 -$ ln -s common include -$ ./configure --with-kernel-lib -$ make -$ make install - - - Compile and install the Lustre source code. - - Install the Lustre source (this can be done via RPM or tarball). The source file is available at the Lustre Git site. This example shows installation via the tarball. - $ cd /usr/src/ -$ gunzip lustre-1.6.6.tar.gz -$ tar -xvf lustre-1.6.6.tar - + Install the RPMs: + + # rpm -ivh lustre-ldiskfs-3.3.0-2.6.32.lustremaster* +# rpm -ivh lustre-modules-2.0.61-2.6.32.lustremaster* +# rpm -ivh lustre-2.0.61-2.6.32.lustremaster_* +# rpm -ivh lustre-tests-* + + +
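+ Before running the test suite, you can optionally confirm that the packages installed and
+ that the Lustre kernel modules are visible to the running kernel. If modinfo cannot find
+ the module, verify that the lustre-modules package matches the kernel reported by uname -r:
+ # rpm -qa | egrep "lustre" | sort
+ # modinfo lustre | egrep "filename|version"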
+
+ Running the Test Suite + To test a single node Lustre file system installation, complete these steps. - Configure and build the Lustre source code. - The ./configure --help command shows a list of all of the --with options. All third-party network stacks are built in this manner. - $ cd lustre-1.6.6 -$ ./configure --with-linux=/usr/src/linux \ ---with-mx=/usr/src/mx-1.2.7 -$ make -$ make rpms - The make rpms command output shows the location of the generated RPMs + Run the test suite. + + # /usr/lib64/lustre/tests/llmount.sh + + Text similar to the following will be + displayed:Loading modules from /usr/lib64/lustre/tests/.. +debug=0x33f0404 +subsystem_debug=0xffb7e3ff +gss/krb5 is not supported +Formatting mgs, mds, osts +Format mds1: /tmp/lustre-mdt1 +Format ost1: /tmp/lustre-ost1 +Format ost2: /tmp/lustre-ost2 +Checking servers environments +Checking clients rhel6-master environments +Loading modules from /usr/lib64/lustre/tests/.. +debug=0x33f0404 +subsystem_debug=0xffb7e3ff +gss/krb5 is not supported +Setup mgs, mdt, osts +Starting mds1: -o loop,user_xattr,acl +/tmp/lustre-mdt1 /mnt/mds1 +debug=0x33f0404 +subsystem_debug=0xffb7e3ff +debug_mb=10 +Started lustre-MDT0000 +Starting ost1: -o loop /tmp/lustre-ost1 /mnt/ost1 +debug=0x33f0404 +subsystem_debug=0xffb7e3ff +debug_mb=10 +Started lustre-OST0000 +Starting ost2: -o loop /tmp/lustre-ost2 /mnt/ost2 +debug=0x33f0404 +subsystem_debug=0xffb7e3ff +debug_mb=10 +Started lustre-OST0001 +Starting client: rhel5-build: -o user_xattr,acl,flock +rhel6-master@tcp:/lustre /mnt/lustre +debug=0x33f0404 +subsystem_debug=0xffb7e3ff +debug_mb=10 +Using TIMEOUT=20 +disable quota as required + The Lustre file system is now available at + /mnt/lustre. + + If you see the error below, associate the IP address of a non-loopback interface + with the name of your machine in the file + /etc/hosts.mkfs.lustre: Can't parse NID 'rhel6-master@tcp' + - - - - Use the rpm -ivh command to install the RPMS. - $ rpm -ivh \ -lustre-1.6.6-2.6.18_92.1.10.el5_lustre.1.6.6smp.x86_64.rpm -$ rpm -ivh \ -lustre-modules-1.6.6-2.6.18_92.1.10.el5_lustre.1.6.6\ -smp.x86_64.rpm -$ rpm -ivh \ -lustre-ldiskfs-3.0.6-2.6.18_92.1.10.el5_lustre.1.6.6\ -smp.x86_64.rpm - - - Add the following lines to the /etc/modprobe.d/lustre.conf file. - options kmxlnd hosts=/etc/hosts.mxlnd -options lnet networks=mx0(myri0),tcp0(eth0) - - - Populate the myri0 configuration with the proper IP addresses. - vim /etc/sysconfig/network-scripts/myri0 - - - Add the following line to the /etc/hosts.mxlnd file. - $ IP HOST BOARD EP_ID - - - Start Lustre. - Once all the machines have rebooted, the next steps are to configure Lustre Networking (LNET) and the Lustre file system. See . - - + +
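+ When testing is finished, the loopback test file system created by llmount.sh can be torn
+ down with the companion cleanup script installed in the same directory:
+ # /usr/lib64/lustre/tests/llmountcleanup.sh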
diff --git a/LustreProgrammingInterfaces.xml b/LustreProgrammingInterfaces.xml index 300645c..cf67023 100644 --- a/LustreProgrammingInterfaces.xml +++ b/LustreProgrammingInterfaces.xml @@ -1,7 +1,8 @@ - - - Lustre Programming Interfaces - This chapter describes public programming interfaces to control various aspects of Lustre from userspace. These interfaces are generally not guaranteed to remain unchanged over time, although we will make an effort to notify the user community well in advance of major changes. This chapter includes the following section: + + Lustre* Programming Interfaces + This chapter describes public programming interfaces that can be used to control various + aspects of a Lustre* file system from userspace. This chapter includes the following + sections: @@ -14,57 +15,109 @@ Lustre programming interface man pages are found in the lustre/doc folder.
- <indexterm><primary>programming</primary><secondary>upcall</secondary></indexterm>User/Group Cache Upcall - This section describes user and group upcall. + <indexterm> + <primary>programming</primary> + <secondary>upcall</secondary> + </indexterm>User/Group Upcall + This section describes the supplementary user/group upcall, which allows the MDS to + retrieve and verify the supplementary groups to which a particular user is assigned. This + avoids the need to pass all the supplementary groups from the client to the MDS with every + RPC. - For information on a universal UID/GID, see . + For information about universal UID/GID requirements in a Lustre file system + environment, see .
- Name - Use /proc/fs/lustre/mdt/${FSNAME}-MDT{xxxx}/identity_upcall to look up a given user's group membership. + Synopsis + The MDS uses the utility lctl get_param + mdt.${FSNAME}-MDT{xxxx}.identity_upcall to look up the supplied UID in order to + retrieve the user's supplementary group membership. The result is temporarily cached in the + kernel (for five minutes, by default) to avoid the overhead of calling into userspace + repeatedly.
Description - The group upcall file contains the path to an executable that is invoked to resolve a numeric UID to a group membership list. This utility opens /proc/fs/lustre/mdt/{mdtname}/identity_info and writes the related identity_downcall_data data structure (see ). The data is persisted with lctl set_param mdt.{mdtname}.identity_info. + The identity upcall file contains the path to an executable that is invoked to resolve a + numeric UID to a group membership list. This utility opens + /proc/fs/lustre/mdt/${FSNAME}-MDT{xxxx}/identity_info and fills in the + related identity_downcall_data data structure (see ). The data is persisted with lctl set_param + mdt.${FSNAME}-MDT{xxxx}.identity_info. For a sample upcall program, see lustre/utils/l_getidentity.c in the Lustre source distribution.
Primary and Secondary Groups The mechanism for the primary/secondary group is as follows: - The MDS issues an upcall (set per MDS) to map the numeric UID to the supplementary group(s). - - - If there is no upcall or if there is an upcall and it fails, supplementary groups will be added as supplied by the client (as they are now). + The MDS issues an upcall (set per MDS) to map the numeric UID to the supplementary + group(s). - The default upcall is /usr/sbin/l_getidentity, which can interact with the user/group database to obtain UID/GID/suppgid. The user/group database depends on authentication configuration, and can be local /etc/passwd, NIS, LDAP, etc. If necessary, the administrator can use a parse utility to set /proc/fs/lustre/mdt/${FSNAME}-MDT{xxxx}/identity_upcall. If the upcall interface is set to NONE, then upcall is disabled. The MDS uses the UID/GID/suppgid supplied by the client. + If there is no upcall or if there is an upcall and it fails, one supplementary + group at most will be added as supplied by the client. - The default group upcall is set by mkfs.lustre. Use tunefs.lustre --param or lctl set_param mdt.{mdtname}.identity_upcall={path} + The default upcall is /usr/sbin/l_getidentity, which can + interact with the user/group database to obtain UID/GID/suppgid. The user/group + database depends on how authentication is configured, such as local + /etc/passwd, Network Information Service (NIS), or Lightweight + Directory Access Protocol (LDAP). If necessary, the administrator can use a parse + utility to set + /proc/fs/lustre/mdt/${FSNAME}-MDT{xxxx}/identity_upcall. If the + upcall interface is set to NONE, then upcall is disabled. The MDS uses the + UID/GID/suppgid supplied by the client. - The Lustre administrator can specify permissions for a specific UID by configuring /etc/lustre/perm.conf on the MDS. As commented in lustre/utils/l_getidentity.c + The default group upcall is set by mkfs.lustre. Use + tunefs.lustre --param or lctl set_param + mdt.${FSNAME}-MDT{xxxx}.identity_upcall={path} - -/* -* permission file format is like this: -* {nid} {uid} {perms} -* -* '*' nid means any nid -* '*' uid means any uid -* the valid values for perms are: -* setuid/setgid/setgrp/rmtacl -- enable corresponding perm -* nosetuid/nosetgid/nosetgrp/normtacl -- disable corresponding perm -* they can be listed together, separated by ',', -* when perm and noperm are in the same line (item), noperm is preferential, -* when they are in different lines (items), the latter is preferential, -* '*' nid is as default perm, and is not preferential.*/ - Currently, rmtacl/normtacl can be ignored (part of security functionality), and used for remote clients. The /usr/sbin/l_getidentity utility can parse /etc/lustre/perm.conf to obtain permission mask for specified UID. + A Lustre administrator can specify permissions for a specific UID by configuring + /etc/lustre/perm.conf on the MDS. The + /usr/sbin/l_getidentity utility parses + /etc/lustre/perm.conf to obtain the permission mask for a specified + UID. + The permission file format + is:{nid} {uid} {perms}An + asterisk (*) in the nid column or + uid column matches any NID or UID respectively. When '*' is + specified as the NID, it is used for the default permissions for all NIDS, unless + permissions are specified for a particular NID. In this case the specified permissions + take precedence for that particular NID. 
Valid values for + perms are: + + setuid/setgid/setgrp/XXX - + enables the corresponding perm + + + nosetuid/nosetgid/nosetgrp/noXXX + - disables the corresponding perm + + Permissions can be specified in a comma-separated list. When a + perm and a noperm permission are + listed in the same line, the noperm permission takes + precedence. When they are listed on separate lines, the permission that appears later + takes precedence. + + + The /usr/sbin/l_getidentity utility can parse + /etc/lustre/perm.conf to obtain the permission mask for the + specified UID. + - To avoid repeated upcalls, the MDS caches supplemental group information. Use /proc/fs/lustre/mdt/${FSNAME}-MDT{xxxx}/identity_expire to set the cache time (default is 600 seconds). The kernel waits for the upcall to complete (at most, 5 seconds) and takes the "failure" behavior as described. Set the wait time in /proc/fs/lustre/mdt/${FSNAME}-MDT{xxxx}/identity_acquire_expire (default is 15 seconds). Cached entries are flushed by writing to /proc/fs/lustre/mdt/${FSNAME}-MDT{xxxx}/identity_flush. + To avoid repeated upcalls, the MDS caches supplemental group information. Use + lctl set_param mdt.*.identity_expire=<seconds> to set the + cache time. The default cache time is 600 seconds. The kernel waits for the upcall to + complete (at most, 5 seconds) and takes the "failure" behavior as described. + Set the wait time using lctl set_param + mdt.*.identity_acquire_expire=<seconds> to change the length of time + that the kernel waits for the upcall to finish. Note that the client process is + blocked during this time. The default wait time is 15 seconds. + Cached entries are flushed using lctl set_param + mdt.${FSNAME}-MDT{xxxx}.identity_flush=0.
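+ As an illustration of the format described above, a minimal /etc/lustre/perm.conf might
+ contain the following (the NID and UID values are placeholders only):
+ * * setuid,setgid,setgrp
+ 192.168.1.10@tcp 500 nosetuid,nosetgid
+ Here the first line sets the default permissions for all NIDs and UIDs, and the second line
+ overrides them for UID 500 on one particular client NID.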
@@ -82,34 +135,45 @@
Data Structures - struct identity_downcall_data { - __u32 idd_magic; - __u32 idd_err; - __u32 idd_uid; - __u32 idd_gid; - __u32 idd_nperms; - struct perm_downcall_data idd_perms[N_PERMS_MAX]; - __u32 idd_ngroups; - __u32 idd_groups[0]; -}; + struct perm_downcall_data{ + __u64 pdd_nid; + __u32 pdd_perm; + __u32 pdd_padding; +}; + +struct identity_downcall_data{ + __u32 idd_magic; + : + :
<indexterm><primary>programming</primary><secondary>l_getidentity</secondary></indexterm><literal>l_getidentity</literal> Utility - The l_getidentity utility handles Lustre user/group cache upcall. + The l_getidentity utility handles the Lustre supplementary group upcall + by default as described in the previous section.
Synopsis - l_getidentity [-v] [-d|mdsname] uid] -l_getidentity [-v] -s + l_getidentity ${FSNAME}-MDT{xxxx} {uid}
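      As an illustration of the synopsis, with a hypothetical file system name and UID, the
        arguments take the following form:
      l_getidentity testfs-MDT0000 500
      In normal operation the utility is not run by hand; it is invoked by the MDS through the
        identity_upcall interface described in the previous section.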
Description
-      The group upcall file contains the path to an executable that is invoked to resolve a numeric UID to a group membership list. This utility opens /proc/fs/lustre/mdt/{mdtname}/identity_info and writes the related identity_downcall_data data structure (see ). The data is persisted with lctl set_param mdt.{mdtname}.identity_info.
-      l_getidentity is the reference implementation of the user/group cache upcall.
+      The identity upcall file contains the path to an executable that is invoked to resolve a
+        numeric UID to a group membership list. This utility completes the
+        identity_downcall_data data structure (see ) and writes the data to the
+        /proc/fs/lustre/mdt/${FSNAME}-MDT{xxxx}/identity_info pseudo file. The
+        data is persisted with lctl set_param
+        mdt.${FSNAME}-MDT{xxxx}.identity_info.
+      l_getidentity is the reference implementation of the user/group cache
+        upcall.
Files - /proc/fs/lustre/mdt/{mdt-name}/identity_upcall + + /proc/fs/lustre/mdt/${FSNAME}-MDT{xxxx}/identity_upcall + /proc/fs/lustre/mdt/${FSNAME}-MDT{xxxx}/identity_info +
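      As a usage sketch (the file system name testfs is a placeholder), the
        upcall currently in use can be inspected with lctl, disabled by setting
        it to NONE, or pointed back at the default utility:
      mds# lctl get_param mdt.testfs-MDT0000.identity_upcall
mdt.testfs-MDT0000.identity_upcall=/usr/sbin/l_getidentity
mds# lctl set_param mdt.testfs-MDT0000.identity_upcall=NONE
mds# lctl set_param mdt.testfs-MDT0000.identity_upcall=/usr/sbin/l_getidentity
      The output line shown is illustrative and assumes the default upcall is configured.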
diff --git a/UpgradingLustre.xml b/UpgradingLustre.xml
index 2a03433..e8aa5f0 100644
--- a/UpgradingLustre.xml
+++ b/UpgradingLustre.xml
@@ -1,146 +1,384 @@
-
-
- Upgrading Lustre
- This chapter describes Lustre interoperability and how to upgrade from Lustre 1.8 to Lustre 2.x, and includes the following sections:
+
+  Upgrading a Lustre* File System
+  This chapter describes interoperability between Lustre* software releases. It also provides
+    procedures for upgrading from a Lustre 1.8 release to a Lustre 2.x release, from a Lustre 2.x
+    release to a more recent Lustre 2.x release (major release upgrade), and from a Lustre 2.X.y
+    release to a more recent 2.X.y release (minor release upgrade). It includes the following
+    sections:
-      Lustre Interoperability
+      
-      Upgrading Lustre 1.8 to 2.x
+      
-      Upgrading to multiple metadata targets
+      
- <indexterm><primary>Lustre</primary><secondary>upgrading</secondary><see>upgrading</see></indexterm> - <indexterm><primary>upgrading</primary></indexterm> - - Lustre Interoperability - Lustre 2.x is built on a new architectural code base which is different than the one used with Lustre 1.8. These architectural changes require existing Lustre 1.8 users to follow a slightly different procedure to upgrade to Lustre 2.x - requiring clients to be unmounted and the file system be shut down. Once the servers are upgraded and restarted, then the clients can be remounted. After the upgrade, Lustre 2.x servers can interoperate with compatible 1.8 clients and servers. Lustre 2.x does not support 2.x clients interoperating with 1.8 servers. - - Lustre 1.8 clients can interoperate with 2.x servers, but the servers should all be upgraded at the same time. - - - Lustre 2.x servers are compatible with clients 1.8.6 and later, though it is strongly recommended that the clients are upgraded to the latest version of Lustre 1.8 available. If you are planning a heterogeneous environment (mixed 1.8 and 2.x servers), make sure that version 1.8.6 or later is installed on the client nodes that are not upgraded to 2.x. - - Lustre 2.4 allows remote sub-directories to be hosted on separate MDTs. Clients prior to 2.4 can only see the namespace hosted by MDT0, and will return an IO error if a directory on a remote MDT is accessed. -
-
- <indexterm><primary>upgrading</primary><secondary>1.8 to 2.x</secondary></indexterm>Upgrading Lustre 1.8 to 2.x - Upgrading from 1.8 to Lustre 2.x involves shutting down the file system and upgrading servers, and optionally clients, all at the same time. This upgrade process does not support a rolling upgrade in which the file system operates continuously while individual servers (or their failover partners) and clients are upgraded one at a time. - - Although the Lustre 1.8 to 2.x upgrade path has been tested, optimum performance will be seen with a freshly formatted 2.x filesystem. - - - From Lustre version 2.2, the large xattr (aka wide striping) feature is added to support up to 2000 OSTs. This feature is disabled by default at mkfs.lustre time. To upgrade from an existing filesystem to enable wide striping on the MDT, "tune2fs -O large_xattr" needs to be run on the MDT device before mounting it after the upgrade. - Then, once the wide striping feature is enabled and in use on the MDT, it will not be possible to directly downgrade the MDT filesystem to an earlier version of Lustre that does not support wide striping. The only way to disable it would be to delete all of the files with large xattrs before downgrade, then unmount the MDT, and then run "tune2fs -O ^large_xattr" to turn off this filesystem feature. - -
- <indexterm><primary>upgrading</primary><secondary>file system</secondary></indexterm>Performing a File System Upgrade - This procedure describes a file system upgrade in which Lustre 2.x packages are installed on multiple 1.8 servers and, optionally, clients, requiring a file system shutdown. You can choose to upgrade the entire Lustre file system to 2.x, or just upgrade the servers to Lustre 2.x and leave the clients running 1.8.6 or later. - - In a Lustre upgrade, the package install/update can be done either before or after the filesystem is unmount. To minimize downtime, this procedure first performs the 2.x package installation, and then unmounts the file system. - - + <indexterm> + <primary>Lustre</primary> + <secondary>upgrading</secondary> + <see>upgrading</see> + </indexterm><indexterm> + <primary>upgrading</primary> + </indexterm>Release Interoperability and Upgrade Requirements + Lustre 2.x release (major) + upgrade: - Make a complete, restorable file system backup before upgrading Lustre. The Lustre 2.x on-disk format itself is compatible with the 1.8 on-disk format, but having a backup is always important. If it is not possible to backup the full filesystem, it is still valuable to have a full device-level backup of the MDT filesystem, as described in . + All servers must be upgraded at the same time, while some or all clients may be + upgraded. - If you are planning a heterogeneous environment (1.8 clients and 2.x servers), make sure that at least version 1.8.6 is installed on clients that are not upgraded to 2.x. + All servers must be be upgraded to Red Hat* Enterprise Linux* 6 (RHEL 6) or other + compatible Linux distribution. See the Linux Test Matrix at for a + list of tested Linux distros. - Install the 2.x packages on the Lustre servers and, optionally, the clients. - All servers must be upgraded from 1.8 to 2.x at the same time. Some or all clients can be upgraded to 2.x at this time. - For help determining where to install a specific package, see . - - - Install the kernel, modules and ldiskfs packages. For example: - $ rpm -ivh -kernel-lustre-smp-ver \ -kernel-ib-ver \ -lustre-modules-ver \ -lustre-ldiskfs-ver - - - Upgrade the utilities/userspace packages. For example: - $ rpm -Uvh lustre-ver - - - If a new e2fsprogs package is available, upgrade it. For example: - $ rpm -Uvh e2fsprogs-ver - - Use e2fsprogs-1.41.90-wc3 or later, available at: - http://downloads.whamcloud.com/public/e2fsprogs/latest/ - - - If you want to add optional packages to your Lustre system, install them now. - - + Clients to be upgraded to the Lustre 2.4 release or higher must be running RHEL 6 or + other compatible Linux distribution. See the Linux Test Matrix at for a + list of tested Linux distros. - - Shut down the file system. - Shut down the components in this order: clients, then the MDT, then OSTs. Unmounting a block device causes Lustre to be shut down on that node. - - - Unmount the clients. On each client node, run: - umount -a -t lustre - - - Unmount the MDT. On the MDS node, run: - umount -a -t lustre - - - Unmount the OSTs (be sure to unmount all OSTs). On each OSS node, run: - umount -a -t lustre - - - - - Since the kernel will typically be upgraded with a 1.8 to 2.x upgrade, the nodes will need to be rebooted in order to use the new kernel. - - - Start the upgraded file system. - Start the components in this order: OSTs, then the MDT, then clients. - - - Mount the OSTs (be sure to mount all OSTs). 
On each OSS node, run: - oss# mount -a -t lustre - This command assumes that all OSTs are listed in the /etc/fstab file. If the OSTs are not in the /etc/fstab file, they need to be mounted individually by running the mount command: - oss# mount -t lustre /dev/block_device /mount_point - - - Mount the MDT. On the MDS node, run: - mds# mount -a -t lustre - - - Mount the file system on the clients. On each client node, run: - client# mount -a -t lustre - - - - - If you have a problem upgrading Lustre, use the wc-discuss mailing list, or file a ticket at the Intel Lustre bug tracker. -
+ + Lustre 2.X.y release (minor) + upgrade: + + + All servers must be upgraded at the same time, while some or all clients may be + upgraded. + + + Rolling upgrades are supported for minor releases allowing individual servers and + clients to be upgraded without stopping the Lustre file system. + +
-
- <indexterm><primary>upgrading</primary><secondary>multiple metadata targets</secondary></indexterm>Upgrading to multiple metadata targets - Lustre 2.4 allows separate metadata servers to serve separate sub directories. To upgrade a filesystem to Lustre 2.4 that support multiple metadata servers: - +
+ <indexterm> + <primary>upgrading</primary> + <secondary>major release (2.x to 2.x)</secondary> + </indexterm><indexterm> + <primary>wide striping</primary> + </indexterm><indexterm> + <primary>MDT</primary> + <secondary>multiple MDSx</secondary> + </indexterm>Upgrading to Lustre Release 2.x (Major Release) + The procedure for upgrading a Lustre release 2.x file system to a more recent 2.x release + of the Lustre software is described in this section. + + This procedure can also be used to upgrade Lustre release 1.8.6-wc1 or later to any + Lustre release 2.x. To upgrade other versions of 1.8.x, contact your support + provider. + + + In Lustre release 2.2, a feature has been added that allows striping + across up to 2000 OSTs. By default, this "wide striping" feature is disabled. It is + activated by setting the large-xattr option on the MDT using either + mkfs.lustre or tune2fs. For example after upgrading + an existing file system to Lustre release 2.2 or later, wide striping can be enabled by + running the following command on the MDT device before mounting + it:tune2fs -O large_xattrOnce the wide striping feature is enabled and in + use on the MDT, it is not possible to directly downgrade the MDT file system to an earlier + version of the Lustre software that does not support wide striping. The only way to disable + wide striping is to delete all files with an extended attribute + (xattr), unmount the MDT, and run the following command to + turn off the large-xattr + option:tune2fs -O ^large_xattr + + + In Lustre release 2.4, a new feature allows using multiple MDTs, which can each serve + one or more remote sub-directories in the file system. The root directory + is always located on MDT0. + Note that clients running a release prior to the Lustre 2.4 release can only see the + namespace hosted by MDT0 and will return an IO error if an attempt is made to access a + directory on another MDT. + + To upgrade a Lustre release 2.x file system to a more recent major release, complete these + steps: + + + Create a complete, restorable file system backup. + + Before installing the Lustre software, back up ALL data. The Lustre software + contains kernel modifications that interact with storage devices and may introduce + security issues and data loss if not installed, configured, or administered properly. If + a full backup of the file system is not practical, a device-level backup of the MDT file + system is recommended. See for a procedure. + + + + Shut down the file system by unmounting all clients and servers in the order shown + below (unmounting a block device causes the Lustre software to be shut down on that + node): + + + Unmount the clients. On each client node, run: + umount -a -t lustre + + + Unmount the MDT. On the MDS node, run: + umount -a -t lustre + + + Unmount all the OSTs. On each OSS node, run: + umount -a -t lustre + + + + + Upgrade the Linux operating system on all servers to RHEL 6 or other compatible + (tested) distribution and reboot. See the Linux Test Matrix at . + + + Upgrade the Linux operating system on all clients to RHEL 6 or other compatible + (tested) distribution and reboot. See the Linux Test Matrix at . + + + Download the Lustre server RPMs for your platform from the Lustre Releases + repository. See for a list of required packages. + + + Install the Lustre server packages on all Lustre servers (MGS, MDSs, and OSSs). 
+ + + Log onto a Lustre server as the root user + + + Use the yum command to install the packages: + + # yum --nogpgcheck install pkg1.rpm pkg2.rpm ... + + + + Verify the packages are installed correctly: + + rpm -qa|egrep "lustre|wc"|sort + + + + Repeat these steps on each Lustre server. + + + + + Download the Lustre client RPMs for your platform from the Lustre Releases + repository. See for a list of required packages. + + The version of the kernel running on a Lustre client must be the same as the version + of the lustre-client-modules-ver package + being installed. If not, a compatible kernel must be installed on the client before the + Lustre client packages are installed. + + + + Install the Lustre client packages on each of the Lustre clients to be + upgraded. + + + Log onto a Lustre client as the root user. + + + Use the yum command to install the packages: + + # yum --nogpgcheck install pkg1.rpm pkg2.rpm ... + + + + Verify the packages were installed correctly: + + # rpm -qa|egrep "lustre|kernel"|sort + + + + Repeat these steps on each Lustre client. + + + + + (Optional) For upgrades to Lustre release 2.2 or higher, to enable + wide striping on an existing MDT, run the following command on the MDT + :mdt# tune2fs -O large_xattr device + + + (Optional) For upgrades to Lustre release 2.4 or higher, to format an + additional MDT, complete these steps: - Stop MGT/MDT/OST and upgrade to 2.4 + Determine the index used for the first MDT (each MDT must have unique index). + Enter:client$ lctl dl | grep mdc +36 UP mdc lustre-MDT0000-mdc-ffff88004edf3c00 4c8be054-144f-9359-b063-8477566eb84e 5 + In this example, the next available index is 1. - Format new MDT according to . + Add the new block device as a new MDT at the next available index by entering + (on one + line):mds# mkfs.lustre --reformat --fsname=filesystem_name --mdt \ + --mgsnode=mgsnode --index 1 /dev/mdt1_device + + + + + (Optional) + If you are upgrading to Lustre release 2.3, or a release previous to Lustre release 2.3, + and want to enable the quota feature, complete these steps: - Mount all of the targets according to . + Before setting up the file system, enter on both the MDS and + OSTs:tunefs.lustre --quota - After recovery is completed clients will be connected MDT0. - Clients prior to 2.4 will only be have the namespace provided by MDT0 visible and will return an IO error if a directory hosted on a remote MDT is accessed. + When setting up the file system, + enter:conf_param $FSNAME.quota.mdt=$QUOTA_TYPE +conf_param $FSNAME.quota.ost=$QUOTA_TYPE - + + + + Start the Lustre file system by starting the components in the order shown in the + following steps: + + + Mount the MGT. On the MGS, runmgs# mount -a -t lustre + + + Mount the MDT(s). On each MDT, run:mds# mount -a -t lustre + + + Mount all the OSTs. On each OSS node, run: + oss# mount -a -t lustre + + This command assumes that all the OSTs are listed in the + /etc/fstab file. OSTs that are not listed in the + /etc/fstab file, must be mounted individually by running the + mount command: + mount -t lustre /dev/block_device /mount_point + + + + Mount the file system on the clients. On each client node, run: + client# mount -a -t lustre + + + + + + The mounting order described in the steps above must be followed for the intial mount + and registration of a Lustre file system after an upgrade. For a normal start of a Lustre + file system, the mounting order is MGT, OSTs, MDT(s), clients. 
+ + If you have a problem upgrading a Lustre file system, see for some ways + to get help. +
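      As a hedged illustration of the optional quota step above, assuming a file system named
        testfs and user/group quota enforcement
        (QUOTA_TYPE=ug), the quota parameters might be set as follows once the
        file system is running; the names are placeholders and the commands are run on the
        MGS:
      mgs# lctl conf_param testfs.quota.mdt=ug
mgs# lctl conf_param testfs.quota.ost=ug
      A simple way to confirm that servers and clients are running the expected release after
        remounting is to query the running version and the installed packages, for example:
      server# lctl get_param version
client# rpm -qa | egrep "lustre|kernel" | sort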
+
+ <indexterm> + <primary>upgrading</primary> + <secondary>2.X.y to 2.X.y (minor release)</secondary> + </indexterm><indexterm/>Upgrading to a Lustre Release 2.X.y (Minor Release) + Rolling upgrades are supported for upgrading from any Lustre release 2.X.y to a more + recent 2.X.y release. This allows the Lustre file system to continue to run while individual + servers (or their failover partners) and clients are upgraded one at a time. The procedure for + upgrading a Lustre release 2.X.y file system to a more recent minor release is described in + this section. + To upgrade Lustre release 2.X.y to a more recent minor release, complete these + steps: + + + Create a complete, restorable file system backup. + + Before installing the Lustre software, back up ALL data. The Lustre software + contains kernel modifications that interact with storage devices and may introduce + security issues and data loss if not installed, configured, or administered properly. If + a full backup of the file system is not practical, a device-level backup of the MDT file + system is recommended. See for a procedure. + + + + Download the Lustre server RPMs for your platform from the Lustre Releases + repository. See for a list of required packages. + + + For a rolling upgrade, complete any procedures required to keep the Lustre file system + running while the server to be upgraded is offline, such as failing over a primary server + to its secondary partner. + + + Unmount the Lustre server to be upgraded (MGS, MDS, or OSS) + + + Install the Lustre server packages on the Lustre server. + + + Log onto the Lustre server as the root user + + + Use the yum command to install the packages: + + # yum --nogpgcheck install pkg1.rpm pkg2.rpm ... + + + + Verify the packages are installed correctly: + + rpm -qa|egrep "lustre|wc"|sort + + + + Mount the Lustre server to restart the Lustre software on the + server:server# mount -a -t lustre + + + Repeat these steps on each Lustre server. + + + + + Download the Lustre client RPMs for your platform from the Lustre Releases + repository. See for a list of required packages. + + + Install the Lustre client packages on each of the Lustre clients to be + upgraded. + + + Log onto a Lustre client as the root user. + + + Use the yum command to install the packages: + + # yum --nogpgcheck install pkg1.rpm pkg2.rpm ... + + + + Verify the packages were installed correctly: + + # rpm -qa|egrep "lustre|kernel"|sort + + + + Mount the Lustre client to restart the Lustre software on the + client:client# mount -a -t lustre + + + Repeat these steps on each Lustre client. + + + + + If you have a problem upgrading a Lustre file system, see for some + suggestions for how to get help.
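    For illustration, a rolling upgrade of a single OSS under the procedure above might look like
      the following sketch. It assumes failover is configured, the new packages have already been
      downloaded to the node, and the package file names are placeholders; use your site's HA
      tooling for the failover and failback steps.
    oss# umount -a -t lustre          # after failing services over to the backup OSS
oss# yum --nogpgcheck install lustre-*.rpm
oss# rpm -qa | egrep "lustre|wc" | sort
oss# mount -a -t lustre           # restart the Lustre software on this server
    The same pattern is then repeated, one server or client at a time, so the file system stays
      available throughout.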
-- 1.8.3.1