From: Richard Henwood Date: Fri, 10 Feb 2012 20:50:04 +0000 (-0600) Subject: LUDOC-33: fixed identity information upcall 2.x X-Git-Tag: 2.2.0~8^2 X-Git-Url: https://git.whamcloud.com/gitweb?a=commitdiff_plain;h=refs%2Fchanges%2F30%2F2130%2F6;p=doc%2Fmanual.git LUDOC-33: fixed identity information upcall 2.x Before Lustre 2.x, the user identity information upcall on the MDS side was configured through the /proc interface: /proc/fs/lustre/mds/{mdsname}/group_upcall Since the Lustre 2.0 release, this interface has been replaced by /proc/fs/lustre/mdt/{mdtname}/identity_upcall, and that interface in turn has been superseded by lctl set_param mdt.{mdtname}.identity_upcall={path}. "NONE" means no identity_upcall. In addition, the following substitutions have been made for Lustre 2.x: "group_info" becomes "identity_info", "mds_grp_downcall_data" becomes "identity_downcall_data", and "l_getgroups" becomes "l_getidentity". Signed-off-by: Richard Henwood Change-Id: I9db35e8882834f4df234f38f13a55fbb1d79a377 --- diff --git a/ConfiguringLustre.xml b/ConfiguringLustre.xml index 8865ff7..2d869e7 100644 --- a/ConfiguringLustre.xml +++ b/ConfiguringLustre.xml @@ -422,7 +422,7 @@ Mount type: ldiskfs Flags: 0x75 (MDT MGS first_time update ) Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr -Parameters: mdt.group_upcall=/usr/sbin/l_getgroups +Parameters: mdt.identity_upcall=/usr/sbin/l_getidentity checking for existing Lustre data: not found device size = 16MB @@ -440,9 +440,9 @@ Writing CONFIGS/mountdata [root@mds /]# mount -t lustre /dev/sdb /mnt/mdt This command generates this output: Lustre: temp-MDT0000: new disk, initializing -Lustre: 3009:0:(lproc_mds.c:262:lprocfs_wr_group_upcall()) temp-MDT0000: -group upcall set to /usr/sbin/l_getgroups -Lustre: temp-MDT0000.mdt: set parameter group_upcall=/usr/sbin/l_getgroups +Lustre: 3009:0:(lproc_mds.c:262:lprocfs_wr_identity_upcall()) temp-MDT0000: +group upcall set to /usr/sbin/l_getidentity +Lustre: temp-MDT0000.mdt: set parameter 
identity_upcall=/usr/sbin/l_getidentity Lustre: Server temp-MDT0000 on device /dev/sdb has started diff --git a/InstallingLustre.xml b/InstallingLustre.xml index 36ee1a3..7c53cb4 100644 --- a/InstallingLustre.xml +++ b/InstallingLustre.xml @@ -313,7 +313,7 @@ Environmental Requirements (Required) - Maintain uniform user and group databases on all cluster nodes . Use the same user IDs (UID) and group IDs (GID) on all clients. If use of supplemental groups is required, verify that the group_upcall requirements have been met. See . + Maintain uniform user and group databases on all cluster nodes . Use the same user IDs (UID) and group IDs (GID) on all clients. If use of supplemental groups is required, verify that the identity_upcall requirements have been met. See . diff --git a/LustreOperations.xml b/LustreOperations.xml index 67bbcc7..a0c6b03 100644 --- a/LustreOperations.xml +++ b/LustreOperations.xml @@ -204,7 +204,7 @@ ossbarnode# mkfs.lustre --fsname=bar --mgsnode=mgsnode@tcp0 --ost --index=1 /dev With tunefs.lustre, parameters are "additive" -- new parameters are specified in addition to old parameters, they do not replace them. To erase all old tunefs.lustre parameters and just use newly-specified parameters, run: $ tunefs.lustre --erase-params --param=<new parameters> The tunefs.lustre command can be used to set any parameter settable in a /proc/fs/lustre file and that has its own OBD device, so it can be specified as <obd|fsname>.<obdtype>.<proc_file_name>=<value>. For example: - $ tunefs.lustre --param mdt.group_upcall=NONE /dev/sda1 + $ tunefs.lustre --param mdt.identity_upcall=NONE /dev/sda1 For more details about tunefs.lustre, see .
@@ -220,10 +220,10 @@ ossbarnode# mkfs.lustre --fsname=bar --mgsnode=mgsnode@tcp0 --ost --index=1 /dev lctl set_param [-n] <obdtype>.<obdname>.<proc_file_name>=<value> For example: # lctl set_param osc.*.max_dirty_mb=1024 -osc.myth-OST0000-osc.max_dirty_mb=32 -osc.myth-OST0001-osc.max_dirty_mb=32 -osc.myth-OST0002-osc.max_dirty_mb=32 -osc.myth-OST0003-osc.max_dirty_mb=32 +osc.myth-OST0000-osc.max_dirty_mb=32 +osc.myth-OST0001-osc.max_dirty_mb=32 +osc.myth-OST0002-osc.max_dirty_mb=32 +osc.myth-OST0003-osc.max_dirty_mb=32 osc.myth-OST0004-osc.max_dirty_mb=32
@@ -232,11 +232,11 @@ osc.myth-OST0004-osc.max_dirty_mb=32 <obd|fsname>.<obdtype>.<proc_file_name>=<value>) Here are a few examples of lctl conf_param commands: $ mgs> lctl conf_param testfs-MDT0000.sys.timeout=40 -$ lctl conf_param testfs-MDT0000.mdt.group_upcall=NONE -$ lctl conf_param testfs.llite.max_read_ahead_mb=16 -$ lctl conf_param testfs-MDT0000.lov.stripesize=2M -$ lctl conf_param testfs-OST0000.osc.max_dirty_mb=29.15 -$ lctl conf_param testfs-OST0000.ost.client_cache_seconds=15 +$ lctl conf_param testfs-MDT0000.mdt.identity_upcall=NONE +$ lctl conf_param testfs.llite.max_read_ahead_mb=16 +$ lctl conf_param testfs-MDT0000.lov.stripesize=2M +$ lctl conf_param testfs-OST0000.osc.max_dirty_mb=29.15 +$ lctl conf_param testfs-OST0000.ost.client_cache_seconds=15 $ lctl conf_param testfs.sys.timeout=40 Parameters specified with the lctl conf_param command are set permanently in the file system's configuration file on the MGS. @@ -257,7 +257,7 @@ $ lctl conf_param testfs.sys.timeout=40 To report current Lustre parameter values, use the lctl get_param command with this syntax: lctl get_param [-n] <obdtype>.<obdname>.<proc_file_name> This example reports data on RPC service times. 
- $ lctl get_param -n ost.*.ost_io.timeouts + $ lctl get_param -n ost.*.ost_io.timeouts service : cur 1 worst 30 (at 1257150393, 85d23h58m54s ago) 1 1 1 1 This example reports the amount of space this client has reserved for writeback cache with each OST: # lctl get_param osc.*.cur_grant_bytes @@ -369,7 +369,7 @@ s as above>; cd /mnt/mds_new; setfattr \--restore=/tmp/mdsea The command output is: debugfs 1.41.5.sun2 (23-Apr-2009) /dev/lustre/ost_test2: catastrophic mode - not reading inode or group bitma\ -ps +ps Inode: 352365 Type: regular Mode: 0666 Flags: 0x80000 Generation: 1574463214 Version: 0xea020000:00000000 User: 500 Group: 500 Size: 260096 @@ -381,7 +381,7 @@ atime: 0x4a216b48:00000000 -- Sat May 30 13:22:16 2009 mtime: 0x4a216b48:00000000 -- Sat May 30 13:22:16 2009 crtime: 0x4a216b3c:975870dc -- Sat May 30 13:22:04 2009 Size of extra inode fields: 24 -Extended attributes stored in inode body: +Extended attributes stored in inode body: fid = "e2 00 11 00 00 00 00 00 25 43 c1 87 00 00 00 00 a0 88 00 00 00 00 00 \ 00 00 00 00 00 00 00 00 00 " (32) BLOCKS: @@ -392,10 +392,10 @@ TOTAL: 64 Note the FID's EA and apply it to the osd_inode_id mapping. In this example, the FID's EA is: e2001100000000002543c18700000000a0880000000000000000000000000000 -struct osd_inode_id { -__u64 oii_ino; /* inode number */ -__u32 oii_gen; /* inode generation */ -__u32 oii_pad; /* alignment padding */ +struct osd_inode_id { +__u64 oii_ino; /* inode number */ +__u32 oii_gen; /* inode generation */ +__u32 oii_pad; /* alignment padding */ }; After swapping, you get an inode number of 0x001100e2 and generation of 0. diff --git a/LustreProgrammingInterfaces.xml b/LustreProgrammingInterfaces.xml index 0780e2c..de7d0c3 100644 --- a/LustreProgrammingInterfaces.xml +++ b/LustreProgrammingInterfaces.xml @@ -25,8 +25,8 @@
Description - The group upcall file contains the path to an executable that, when installed, is invoked to resolve a numeric UID to a group membership list. This utility should complete the mds_grp_downcall_data data structure (see ) and write it to the /proc/fs/lustre/mdt/${FSNAME}-MDT{xxxx}/identity_info pseudo-file. - For a sample upcall program, see lustre/utils/l_getgroups.c in the Lustre source distribution. + The group upcall file contains the path to an executable that is invoked to resolve a numeric UID to a group membership list. This utility opens /proc/fs/lustre/mdt/{mdtname}/identity_info and writes the related identity_downcall_data data structure (see ). The data is persisted with lctl set_param mdt.{mdtname}.identity_info. + For a sample upcall program, see lustre/utils/l_getidentity.c in the Lustre source distribution.
Primary and Secondary Groups The mechanism for the primary/secondary group is as follows: @@ -41,7 +41,7 @@ The default upcall is /usr/sbin/l_getidentity, which can interact with the user/group database to obtain UID/GID/suppgid. The user/group database depends on authentication configuration, and can be local /etc/passwd, NIS, LDAP, etc. If necessary, the administrator can use a parse utility to set /proc/fs/lustre/mdt/${FSNAME}-MDT{xxxx}/identity_upcall. If the upcall interface is set to NONE, then upcall is disabled. The MDS uses the UID/GID/suppgid supplied by the client. - The default group upcall is set by mkfs.lustre. Use tunefs.lustre --param or echo{path}>/proc/fs/lustre/mds/{mdsname}/group_upcall + The default group upcall is set by mkfs.lustre. Use tunefs.lustre --param or lctl set_param mdt.{mdtname}.identity_upcall={path}. The Lustre administrator can specify permissions for a specific UID by configuring /etc/lustre/perm.conf on the MDS. As commented in lustre/utils/l_getidentity.c @@ -95,21 +95,21 @@
- <indexterm><primary>programming</primary><secondary>l_getgroups</secondary></indexterm><literal>l_getgroups</literal> Utility - The l_getgroups utility handles Lustre user/group cache upcall. + <indexterm><primary>programming</primary><secondary>l_getidentity</secondary></indexterm><literal>l_getidentity</literal> Utility + The l_getidentity utility handles Lustre user/group cache upcall.
Synopsis - l_getgroups [-v] [-d|mdsname] uid] -l_getgroups [-v] -s + l_getidentity [-v] [-d|mdsname] uid +l_getidentity [-v] -s
Description - The group upcall file contains the path to an executable that, when properly installed, is invoked to resolve a numeric UID to a group membership list. This utility should complete the mds_grp_downcall_data data structure (see Data structures) and write it to the /proc/fs/lustre/mds/mds-service/group_info pseudo-file. - l_getgroups is the reference implementation of the user/group cache upcall. + The group upcall file contains the path to an executable that is invoked to resolve a numeric UID to a group membership list. This utility opens /proc/fs/lustre/mdt/{mdtname}/identity_info and writes the related identity_downcall_data data structure (see ). The data is persisted with lctl set_param mdt.{mdtname}.identity_info. + l_getidentity is the reference implementation of the user/group cache upcall.
Files - /proc/fs/lustre/mds/mds-service/group_upcall + /proc/fs/lustre/mdt/{mdtname}/identity_upcall
diff --git a/ManagingFileSystemIO.xml b/ManagingFileSystemIO.xml index 665a5e0..8aa67c8 100644 --- a/ManagingFileSystemIO.xml +++ b/ManagingFileSystemIO.xml @@ -1,5 +1,5 @@ - + Managing the File System and I/O This chapter describes file striping and I/O options, and includes the following sections: diff --git a/SystemConfigurationUtilities.xml b/SystemConfigurationUtilities.xml index b7434f7..672b58d 100644 --- a/SystemConfigurationUtilities.xml +++ b/SystemConfigurationUtilities.xml @@ -141,7 +141,7 @@ l_getidentity
Description - The group upcall file contains the path to an executable file that, when properly installed, is invoked to resolve a numeric UID to a group membership list. This utility should complete the mds_grp_downcall_data structure and write it to the /proc/fs/lustre/mdt/${FSNAME}-MDT{xxxx}/identity_info pseudo-file. + The group upcall file contains the path to an executable file that is invoked to resolve a numeric UID to a group membership list. This utility opens /proc/fs/lustre/mdt/{mdtname}/identity_info and writes the related identity_downcall_data structure (see ). The data is persisted with lctl set_param mdt.{mdtname}.identity_info. The l_getidentity utility is the reference implementation of the user or group cache upcall.
@@ -219,7 +219,7 @@ quit Many permanent parameters can be set with lctl conf_param. In general, lctl conf_param can be used to specify any parameter settable in a /proc/fs/lustre file, with its own OBD device. The lctl conf_param command uses this syntax: <obd|fsname>.<obdtype>.<proc_file_name>=<value>) For example: - $ lctl conf_param testfs-MDT0000.mdt.group_upcall=NONE + $ lctl conf_param testfs-MDT0000.mdt.identity_upcall=NONE $ lctl conf_param testfs.llite.max_read_ahead_mb=16 The lctl conf_param command permanently sets parameters in the file system configuration. @@ -2212,7 +2212,7 @@ tunefs.lustre With tunefs.lustre, parameters are "additive" -- new parameters are specified in addition to old parameters, they do not replace them. To erase all old tunefs.lustre parameters and just use newly-specified parameters, run: $ tunefs.lustre --erase-params --param=<new parameters> The tunefs.lustre command can be used to set any parameter settable in a /proc/fs/lustre file and that has its own OBD device, so it can be specified as <obd|fsname>.<obdtype>.<proc_file_name>=<value>. For example: - $ tunefs.lustre --param mdt.group_upcall=NONE /dev/sda1 + $ tunefs.lustre --param mdt.identity_upcall=NONE /dev/sda1
Options diff --git a/UnderstandingLustreNetworking.xml b/UnderstandingLustreNetworking.xml index 4ad24ce..60f79e2 100644 --- a/UnderstandingLustreNetworking.xml +++ b/UnderstandingLustreNetworking.xml @@ -1,7 +1,7 @@ Understanding Lustre Networking (LNET) - This chapter introduces Lustre Networking (LNET) and includes the following sections: + This chapter introduces Lustre Networking (LNET) and includes the following sections: @@ -20,10 +20,12 @@
- - <indexterm><primary>LNET</primary></indexterm> - <indexterm><primary>LNET</primary><secondary>understanding</secondary></indexterm> - Introducing LNET + <indexterm> + <primary>LNET</primary> + </indexterm><indexterm> + <primary>LNET</primary> + <secondary>understanding</secondary> + </indexterm> Introducing LNET In a cluster with a Lustre file system, the system network connecting the servers and the clients is implemented using Lustre Networking (LNET), which provides the communication infrastructure required by the Lustre file system. LNET supports many commonly-used network types, such as InfiniBand and IP networks, and allows simultaneous availability across multiple network types with routing between them. Remote Direct Memory Access (RDMA) is permitted when supported by underlying networks using the appropriate Lustre network driver (LND). High availability and recovery features enable transparent recovery in conjunction with failover servers. An LND is a pluggable driver that provides support for a particular network type. LNDs are loaded into the driver stack, with one LND for each network type in use. @@ -31,7 +33,10 @@ For information about administering LNET, see .
- <indexterm><primary>LNET</primary><secondary>features</secondary></indexterm>Key Features of LNET + <indexterm> + <primary>LNET</primary> + <secondary>features</secondary> + </indexterm>Key Features of LNET Key features of LNET include: @@ -51,7 +56,10 @@ Lustre can use bonded networks, such as bonded Ethernet networks, when the underlying network technology supports bonding. For more information, see .
- <indexterm><primary>LNET</primary><secondary>supported networks</secondary></indexterm>Supported Network Types + <indexterm> + <primary>LNET</primary> + <secondary>supported networks</secondary> + </indexterm>Supported Network Types LNET includes LNDs to support many network types including: diff --git a/index.xml b/index.xml index f00ec9a..d2a1220 100644 --- a/index.xml +++ b/index.xml @@ -1,42 +1,31 @@ - - - + + + - Lustre 2.x Filesystem + Lustre 2.x Filesystem Operations Manual - - 2010 - 2011 - Oracle and/or its affiliates. (The original version of this Operations Manual without the Whamcloud modifications.) - + 2010 + 2011 + Oracle and/or its affiliates. (The original version of this Operations Manual without the Whamcloud modifications.) + - 2011 - Whamcloud, Inc. (Whamcloud modifications to the original version of this Operations Manual.) + 2011 + Whamcloud, Inc. (Whamcloud modifications to the original version of this Operations Manual.) - - Notwithstanding Whamcloud’s ownership of the copyright in the modifications to the original version of this Operations Manual, as between Whamcloud and Oracle, Oracle and/or its affiliates retain sole ownership of the copyright in the unmodified portions of this Operations Manual. - - - + + Notwithstanding Whamcloud’s ownership of the copyright in the modifications to the original version of this Operations Manual, as between Whamcloud and Oracle, Oracle and/or its affiliates retain sole ownership of the copyright in the unmodified portions of this Operations Manual. + + + - - - - - - - - - - - - - - - - - - - + + + + + + + + +
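For orientation, the upcall renamed throughout this patch does one essential job before its downcall: map a numeric UID to the user's full group membership list. Below is a toy sketch of that mapping step only, using Python's standard pwd and grp modules. It is not l_getidentity (which is written in C, packs its result into an identity_downcall_data structure, and writes it to the identity_info interface), and the function name is invented for illustration.

```python
import grp
import pwd

def resolve_groups(uid):
    """Return the sorted group list for a numeric UID: the primary GID
    from the passwd entry plus every supplementary group that lists the
    user as a member -- the same UID -> group-list mapping an identity
    upcall must hand back to the MDS."""
    user = pwd.getpwuid(uid)       # raises KeyError for an unknown UID
    gids = {user.pw_gid}           # primary group
    for g in grp.getgrall():       # scan the group database for supplementary groups
        if user.pw_name in g.gr_mem:
            gids.add(g.gr_gid)
    return sorted(gids)

if __name__ == "__main__":
    import os
    print(resolve_groups(os.getuid()))
```

Running it prints the current user's group list; the real utility performs the equivalent lookup against whatever user/group database (/etc/passwd, NIS, LDAP) the MDS node is configured with.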