Flags: 0x75
(MDT MGS first_time update )
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
-Parameters: mdt.group_upcall=/usr/sbin/l_getgroups
+Parameters: mdt.identity_upcall=/usr/sbin/l_getidentity
checking for existing Lustre data: not found
device size = 16MB
<screen>[root@mds /]# mount -t lustre /dev/sdb /mnt/mdt</screen>
<para>This command generates this output:</para>
<screen>Lustre: temp-MDT0000: new disk, initializing
-Lustre: 3009:0:(lproc_mds.c:262:lprocfs_wr_group_upcall()) temp-MDT0000:
-group upcall set to /usr/sbin/l_getgroups
-Lustre: temp-MDT0000.mdt: set parameter group_upcall=/usr/sbin/l_getgroups
+Lustre: 3009:0:(lproc_mds.c:262:lprocfs_wr_identity_upcall()) temp-MDT0000:
+group upcall set to /usr/sbin/l_getidentity
+Lustre: temp-MDT0000.mdt: set parameter identity_upcall=/usr/sbin/l_getidentity
Lustre: Server temp-MDT0000 on device /dev/sdb has started </screen>
</listitem>
<listitem xml:id="dbdoclet.50438267_pgfId-1291170">
<emphasis role="italic">
<emphasis>(Required)</emphasis>
</emphasis>
- </emphasis><emphasis role="bold"> Maintain uniform user and group databases on all cluster nodes</emphasis> . Use the same user IDs (UID) and group IDs (GID) on all clients. If use of supplemental groups is required, verify that the group_upcall requirements have been met. See <xref linkend="dbdoclet.50438291_32926"/>.</para>
+ <emphasis role="bold"> Maintain uniform user and group databases on all cluster nodes</emphasis>. Use the same user IDs (UID) and group IDs (GID) on all clients. If use of supplemental groups is required, verify that the identity_upcall requirements have been met. See <xref linkend="dbdoclet.50438291_32926"/>.</para>
</listitem>
<listitem>
<para><emphasis role="bold">
<para>With <literal>tunefs.lustre</literal>, parameters are "additive" -- new parameters are specified in addition to old parameters; they do not replace them. To erase all old <literal>tunefs.lustre</literal> parameters and use only newly-specified parameters, run:</para>
<screen>$ tunefs.lustre --erase-params --param=<new parameters> </screen>
<para>The tunefs.lustre command can be used to set any parameter settable in a /proc/fs/lustre file, provided the parameter has its own OBD device, so it can be specified as <literal><obd|fsname>.<obdtype>.<proc_file_name>=<value></literal>. For example:</para>
- <screen>$ tunefs.lustre --param mdt.group_upcall=NONE /dev/sda1</screen>
+ <screen>$ tunefs.lustre --param mdt.identity_upcall=NONE /dev/sda1</screen>
<para>For more details about <literal>tunefs.lustre</literal>, see <xref linkend="systemconfigurationutilities"/>.</para>
</section>
<section xml:id="dbdoclet.50438194_51490">
<screen>lctl set_param [-n] <obdtype>.<obdname>.<proc_file_name>=<value></screen>
<para>For example:</para>
<screen># lctl set_param osc.*.max_dirty_mb=1024
-osc.myth-OST0000-osc.max_dirty_mb=32
-osc.myth-OST0001-osc.max_dirty_mb=32
-osc.myth-OST0002-osc.max_dirty_mb=32
-osc.myth-OST0003-osc.max_dirty_mb=32
+osc.myth-OST0000-osc.max_dirty_mb=32
+osc.myth-OST0001-osc.max_dirty_mb=32
+osc.myth-OST0002-osc.max_dirty_mb=32
+osc.myth-OST0003-osc.max_dirty_mb=32
osc.myth-OST0004-osc.max_dirty_mb=32</screen>
</section>
<section xml:id="dbdoclet.50438194_64195">
<screen><obd|fsname>.<obdtype>.<proc_file_name>=<value></screen>
<para>Here are a few examples of <literal>lctl conf_param</literal> commands:</para>
<screen>$ mgs> lctl conf_param testfs-MDT0000.sys.timeout=40
-$ lctl conf_param testfs-MDT0000.mdt.group_upcall=NONE
-$ lctl conf_param testfs.llite.max_read_ahead_mb=16
-$ lctl conf_param testfs-MDT0000.lov.stripesize=2M
-$ lctl conf_param testfs-OST0000.osc.max_dirty_mb=29.15
-$ lctl conf_param testfs-OST0000.ost.client_cache_seconds=15
+$ lctl conf_param testfs-MDT0000.mdt.identity_upcall=NONE
+$ lctl conf_param testfs.llite.max_read_ahead_mb=16
+$ lctl conf_param testfs-MDT0000.lov.stripesize=2M
+$ lctl conf_param testfs-OST0000.osc.max_dirty_mb=29.15
+$ lctl conf_param testfs-OST0000.ost.client_cache_seconds=15
$ lctl conf_param testfs.sys.timeout=40 </screen>
<caution>
<para>Parameters specified with the <literal>lctl conf_param</literal> command are set permanently in the file system's configuration file on the MGS.</para>
<para>To report current Lustre parameter values, use the <literal>lctl get_param</literal> command with this syntax:</para>
<screen>lctl get_param [-n] <obdtype>.<obdname>.<proc_file_name></screen>
<para>This example reports data on RPC service times.</para>
- <screen>$ lctl get_param -n ost.*.ost_io.timeouts
+ <screen>$ lctl get_param -n ost.*.ost_io.timeouts
service : cur 1 worst 30 (at 1257150393, 85d23h58m54s ago) 1 1 1 1 </screen>
<para>This example reports the amount of space this client has reserved for writeback cache with each OST:</para>
<screen># lctl get_param osc.*.cur_grant_bytes</screen>
<para>The command output is:</para>
<screen>debugfs 1.41.5.sun2 (23-Apr-2009)
/dev/lustre/ost_test2: catastrophic mode - not reading inode or group bitma\
-ps
+ps
Inode: 352365 Type: regular Mode: 0666 Flags: 0x80000
Generation: 1574463214 Version: 0xea020000:00000000
User: 500 Group: 500 Size: 260096
mtime: 0x4a216b48:00000000 -- Sat May 30 13:22:16 2009
crtime: 0x4a216b3c:975870dc -- Sat May 30 13:22:04 2009
Size of extra inode fields: 24
-Extended attributes stored in inode body:
+Extended attributes stored in inode body:
fid = "e2 00 11 00 00 00 00 00 25 43 c1 87 00 00 00 00 a0 88 00 00 00 00 00 \
00 00 00 00 00 00 00 00 00 " (32)
BLOCKS:
<para>Note the FID's EA and apply it to the <literal>osd_inode_id</literal> mapping.</para>
<para>In this example, the FID's EA is:</para>
<screen>e2001100000000002543c18700000000a0880000000000000000000000000000
-struct osd_inode_id {
-__u64 oii_ino; /* inode number */
-__u32 oii_gen; /* inode generation */
-__u32 oii_pad; /* alignment padding */
+struct osd_inode_id {
+__u64 oii_ino; /* inode number */
+__u32 oii_gen; /* inode generation */
+__u32 oii_pad; /* alignment padding */
};</screen>
<para>After swapping, you get an inode number of <literal>0x001100e2</literal> and generation of <literal>0</literal>.</para>
</listitem>
</section>
<section remap="h3">
<title>Description</title>
- <para>The group upcall file contains the path to an executable that, when installed, is invoked to resolve a numeric UID to a group membership list. This utility should complete the <literal>mds_grp_downcall_data</literal> data structure (see <xref linkend="dbdoclet.50438291_33759"/>) and write it to the <literal>/proc/fs/lustre/mdt/${FSNAME}-MDT{xxxx}/identity_info</literal> pseudo-file.</para>
- <para>For a sample upcall program, see <literal>lustre/utils/l_getgroups.c</literal> in the Lustre source distribution.</para>
+ <para>The group upcall file contains the path to an executable that is invoked to resolve a numeric UID to a group membership list. This utility opens <literal>/proc/fs/lustre/mdt/{mdtname}/identity_info</literal> and writes the related <literal>identity_downcall_data</literal> data structure (see <xref linkend="dbdoclet.50438291_33759"/>). The data is persisted with <literal>lctl set_param mdt.{mdtname}.identity_info</literal>.</para>
+ <para>For a sample upcall program, see <literal>lustre/utils/l_getidentity.c</literal> in the Lustre source distribution.</para>
<section remap="h4">
<title>Primary and Secondary Groups</title>
<para>The mechanism for the primary/secondary group is as follows:</para>
<para>The default upcall is <literal>/usr/sbin/l_getidentity</literal>, which can interact with the user/group database to obtain UID/GID/suppgid. The user/group database depends on authentication configuration, and can be local <literal>/etc/passwd</literal>, NIS, LDAP, etc. If necessary, the administrator can use a parse utility to set <literal>/proc/fs/lustre/mdt/${FSNAME}-MDT{xxxx}/identity_upcall</literal>. If the upcall interface is set to NONE, then upcall is disabled. The MDS uses the UID/GID/suppgid supplied by the client.</para>
</listitem>
<listitem>
- <para>The default group upcall is set by mkfs.lustre. Use <literal>tunefs.lustre --param</literal> or <literal>echo{path}>/proc/fs/lustre/mds/{mdsname}/group_upcall</literal></para>
+ <para>The default group upcall is set by mkfs.lustre. Use <literal>tunefs.lustre --param</literal> or <literal>lctl set_param mdt.{mdtname}.identity_upcall={path}</literal></para>
</listitem>
<listitem>
<para>The Lustre administrator can specify permissions for a specific UID by configuring <literal>/etc/lustre/perm.conf</literal> on the MDS. As commented in <literal>lustre/utils/l_getidentity.c</literal></para>
</section>
</section>
<section xml:id="dbdoclet.50438291_73963">
- <title><indexterm><primary>programming</primary><secondary>l_getgroups</secondary></indexterm><literal>l_getgroups</literal> Utility</title>
- <para>The <literal>l_getgroups</literal> utility handles Lustre user/group cache upcall.</para>
+ <title><indexterm><primary>programming</primary><secondary>l_getidentity</secondary></indexterm><literal>l_getidentity</literal> Utility</title>
+ <para>The <literal>l_getidentity</literal> utility handles the Lustre user/group cache upcall.</para>
<section remap="h5">
<title>Synopsis</title>
- <screen>l_getgroups [-v] [-d|mdsname] uid]
-l_getgroups [-v] -s</screen>
+ <screen>l_getidentity [-v] [-d|mdsname] uid
+l_getidentity [-v] -s</screen>
</section>
<section remap="h5">
<title>Description</title>
- <para>The group upcall file contains the path to an executable that, when properly installed, is invoked to resolve a numeric UID to a group membership list. This utility should complete the <literal>mds_grp_downcall_data</literal> data structure (see Data structures) and write it to the <literal>/proc/fs/lustre/mds/mds-service/group_info</literal> pseudo-file.</para>
- <para>l_getgroups is the reference implementation of the user/group cache upcall.</para>
+ <para>The group upcall file contains the path to an executable that is invoked to resolve a numeric UID to a group membership list. This utility opens <literal>/proc/fs/lustre/mdt/{mdtname}/identity_info</literal> and writes the related <literal>identity_downcall_data</literal> data structure (see <xref linkend='dbdoclet.50438291_33759'/>). The data is persisted with <literal>lctl set_param mdt.{mdtname}.identity_info</literal>.</para>
+ <para><literal>l_getidentity</literal> is the reference implementation of the user/group cache upcall.</para>
</section>
<section remap="h5">
<title>Files</title>
- <para><literal>/proc/fs/lustre/mds/mds-service/group_upcall</literal></para>
+ <para><literal>/proc/fs/lustre/mdt/{mdt-name}/identity_upcall</literal></para>
</section>
</section>
</chapter>
<?xml version='1.0' encoding='UTF-8'?>
-<!-- This document was created with Syntext Serna Free. --><chapter xmlns="http://docbook.org/ns/docbook" xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en-US" xml:id="managingfilesystemio">
+<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en-US" xml:id="managingfilesystemio">
<title xml:id="managingfilesystemio.title">Managing the File System and I/O</title>
<para>This chapter describes file striping and I/O options, and includes the following sections:</para>
<itemizedlist>
</section>
<section remap="h5">
<title>Description</title>
- <para>The group upcall file contains the path to an executable file that, when properly installed, is invoked to resolve a numeric UID to a group membership list. This utility should complete the mds_grp_downcall_data structure and write it to the /proc/fs/lustre/mdt/${FSNAME}-MDT{xxxx}/identity_info pseudo-file.</para>
+ <para>The group upcall file contains the path to an executable file that is invoked to resolve a numeric UID to a group membership list. This utility opens <literal>/proc/fs/lustre/mdt/{mdtname}/identity_info</literal> and writes the related <literal>identity_downcall_data</literal> structure (see <xref linkend='dbdoclet.50438291_33759'/>). The data is persisted with <literal>lctl set_param mdt.{mdtname}.identity_info</literal>.</para>
<para>The l_getidentity utility is the reference implementation of the user or group cache upcall.</para>
</section>
<section remap="h5">
<para>Many permanent parameters can be set with lctl conf_param. In general, lctl conf_param can be used to specify any parameter settable in a /proc/fs/lustre file, with its own OBD device. The lctl conf_param command uses this syntax:</para>
<screen><obd|fsname>.<obdtype>.<proc_file_name>=<value></screen>
<para>For example:</para>
- <screen>$ lctl conf_param testfs-MDT0000.mdt.group_upcall=NONE
+ <screen>$ lctl conf_param testfs-MDT0000.mdt.identity_upcall=NONE
$ lctl conf_param testfs.llite.max_read_ahead_mb=16 </screen>
<caution>
<para>The lctl conf_param command permanently sets parameters in the file system configuration.</para>
<para>With tunefs.lustre, parameters are "additive" -- new parameters are specified in addition to old parameters; they do not replace them. To erase all old tunefs.lustre parameters and use only newly-specified parameters, run:</para>
<screen>$ tunefs.lustre --erase-params --param=<new parameters> </screen>
<para>The tunefs.lustre command can be used to set any parameter settable in a /proc/fs/lustre file, provided the parameter has its own OBD device, so it can be specified as <obd|fsname>.<obdtype>.<proc_file_name>=<value>. For example:</para>
- <screen>$ tunefs.lustre --param mdt.group_upcall=NONE /dev/sda1</screen>
+ <screen>$ tunefs.lustre --param mdt.identity_upcall=NONE /dev/sda1</screen>
</section>
<section remap="h5">
<title>Options</title>
<?xml version='1.0' encoding='UTF-8'?>
<!-- This document was created with Syntext Serna Free. --><chapter xmlns="http://docbook.org/ns/docbook" xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en-US" xml:id="understandinglustrenetworking">
<title xml:id="understandinglustrenetworking.title">Understanding Lustre Networking (LNET)</title>
- <para>This chapter introduces Lustre Networking (LNET) and includes the following sections:</para>
+ <para>This chapter introduces Lustre Networking (LNET) and includes the following sections:</para>
<itemizedlist>
<listitem>
<para>
</listitem>
</itemizedlist>
<section xml:id="dbdoclet.50438191_22878">
- <title>
- <indexterm><primary>LNET</primary></indexterm>
- <indexterm><primary>LNET</primary><secondary>understanding</secondary></indexterm>
- Introducing LNET</title>
+ <title><indexterm>
+ <primary>LNET</primary>
+ </indexterm><indexterm>
+ <primary>LNET</primary>
+ <secondary>understanding</secondary>
+ </indexterm> Introducing LNET</title>
<para>In a cluster with a Lustre file system, the system network connecting the servers and the clients is implemented using Lustre Networking (LNET), which provides the communication infrastructure required by the Lustre file system.</para>
<para>LNET supports many commonly-used network types, such as InfiniBand and IP networks, and allows simultaneous availability across multiple network types with routing between them. Remote Direct Memory Access (RDMA) is permitted when supported by underlying networks using the appropriate Lustre network driver (LND). High availability and recovery features enable transparent recovery in conjunction with failover servers.</para>
<para>An LND is a pluggable driver that provides support for a particular network type. LNDs are loaded into the driver stack, with one LND for each network type in use.</para>
<para>For information about administering LNET, see <xref linkend="adminlustrepart3"/>.</para>
</section>
<section xml:id="dbdoclet.50438191_19625">
- <title><indexterm><primary>LNET</primary><secondary>features</secondary></indexterm>Key Features of LNET</title>
+ <title><indexterm>
+ <primary>LNET</primary>
+ <secondary>features</secondary>
+ </indexterm>Key Features of LNET</title>
<para>Key features of LNET include:</para>
<itemizedlist>
<listitem>
<para>Lustre can use bonded networks, such as bonded Ethernet networks, when the underlying network technology supports bonding. For more information, see <xref linkend="settingupbonding"/>.</para>
</section>
<section xml:id="dbdoclet.50438191_20721">
- <title><indexterm><primary>LNET</primary><secondary>supported networks</secondary></indexterm>Supported Network Types</title>
+ <title><indexterm>
+ <primary>LNET</primary>
+ <secondary>supported networks</secondary>
+ </indexterm>Supported Network Types</title>
<para>LNET includes LNDs to support many network types including:</para>
<itemizedlist>
<listitem>
-<?xml version="1.0" encoding="UTF-8"?>
-<book version="5.0" xml:lang="en-US" xmlns="http://docbook.org/ns/docbook" xmlns:xl="http://www.w3.org/1999/xlink" xmlns:xi="http://www.w3.org/2001/XInclude">
-
+<?xml version='1.0' encoding='UTF-8'?>
+<!-- This document was created with Syntext Serna Free. -->
+<book xmlns="http://docbook.org/ns/docbook" xmlns:xl="http://www.w3.org/1999/xlink" xmlns:xi="http://www.w3.org/2001/XInclude" version="5.0" xml:lang="en-US">
<info>
- <title>Lustre 2.x Filesystem</title>
+ <title>Lustre 2.x Filesystem</title>
<subtitle>Operations Manual</subtitle>
-
<copyright>
- <year>2010</year>
- <year>2011</year>
- <holder>Oracle and/or its affiliates. (The original version of this Operations Manual without the Whamcloud modifications.)</holder>
- </copyright>
+ <year>2010</year>
+ <year>2011</year>
+ <holder>Oracle and/or its affiliates. (The original version of this Operations Manual without the Whamcloud modifications.)</holder>
+ </copyright>
<copyright>
- <year>2011</year>
- <holder>Whamcloud, Inc. (Whamcloud modifications to the original version of this Operations Manual.)</holder>
+ <year>2011</year>
+ <holder>Whamcloud, Inc. (Whamcloud modifications to the original version of this Operations Manual.)</holder>
</copyright>
-
- <legalnotice><para>Notwithstanding Whamcloud’s ownership of the copyright in the modifications to the original version of this Operations Manual, as between Whamcloud and Oracle, Oracle and/or its affiliates retain sole ownership of the copyright in the unmodified portions of this Operations Manual.</para></legalnotice>
-
- <xi:include href="legalnoticeWhamcloud.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
- <xi:include href="legalnoticeOracle.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
+ <legalnotice>
+ <para>Notwithstanding Whamcloud’s ownership of the copyright in the modifications to the original version of this Operations Manual, as between Whamcloud and Oracle, Oracle and/or its affiliates retain sole ownership of the copyright in the unmodified portions of this Operations Manual.</para>
+ </legalnotice>
+ <xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="legalnoticeWhamcloud.xml"/>
+ <xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="legalnoticeOracle.xml"/>
</info>
-
-
- <xi:include href="Preface.xml" />
-
- <xi:include href="I_LustreIntro.xml" />
-
- <xi:include href="II_LustreInstallConfig.xml" />
-
- <xi:include href="III_LustreAdministration.xml" />
-
- <xi:include href="IV_LustreTuning.xml" />
-
- <xi:include href="V_LustreTroubleshooting.xml" />
-
- <xi:include href="VI_Reference.xml" />
-
- <xi:include href="Glossary.xml" />
-
- <xi:include href="ix.xml" />
+ <xi:include href="Preface.xml"/>
+ <xi:include href="I_LustreIntro.xml"/>
+ <xi:include href="II_LustreInstallConfig.xml"/>
+ <xi:include href="III_LustreAdministration.xml"/>
+ <xi:include href="IV_LustreTuning.xml"/>
+ <xi:include href="V_LustreTroubleshooting.xml"/>
+ <xi:include href="VI_Reference.xml"/>
+ <xi:include href="Glossary.xml"/>
+ <xi:include href="ix.xml"/>
</book>