1 <?xml version='1.0' encoding='UTF-8'?>
2 <chapter xmlns="http://docbook.org/ns/docbook"
3 xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:lang="en-US"
4 xml:id="managingsecurity">
5 <title xml:id="managingsecurity.title">Managing Security in a Lustre File System</title>
6 <para>This chapter describes security features of the Lustre file system and
7 includes the following sections:</para>
10 <para><xref linkend="managingSecurity.acl"/></para>
13 <para><xref linkend="managingSecurity.root_squash"/></para>
16 <para><xref linkend="managingSecurity.isolation"/></para>
19 <para><xref linkend="managingSecurity.sepol"/></para>
22 <para><xref linkend="managingSecurity.clientencryption"/></para>
25 <para><xref linkend="managingSecurity.kerberos"/></para>
28 <section xml:id="managingSecurity.acl">
<title><indexterm><primary>Access Control List (ACL)</primary></indexterm>
Using ACLs</title>
<para>An access control list (ACL) is a set of data that informs an
32 operating system about permissions or access rights that each user or
33 group has to specific system objects, such as directories or files. Each
34 object has a unique security attribute that identifies users who have
35 access to it. The ACL lists each object and user access privileges such as
36 read, write or execute.</para>
37 <section xml:id="managingSecurity.acl.howItWorks" remap="h3">
38 <title><indexterm><primary>Access Control List (ACL)</primary><secondary>
39 how they work</secondary></indexterm>How ACLs Work</title>
40 <para>Implementing ACLs varies between operating systems. Systems that
41 support the Portable Operating System Interface (POSIX) family of
42 standards share a simple yet powerful file system permission model,
43 which should be well-known to the Linux/UNIX administrator. ACLs add
44 finer-grained permissions to this model, allowing for more complicated
45 permission schemes. For a detailed explanation of ACLs on a Linux
46 operating system, refer to the SUSE Labs article
47 <link xl:href="https://www.usenix.org/legacyurl/posix-access-control-lists-linux">
48 Posix Access Control Lists on Linux</link>.</para>
<para>We have implemented ACLs according to this model. The Lustre
software works with the standard Linux ACL tools,
<literal>setfacl</literal>, <literal>getfacl</literal>, and the
historical <literal>chacl</literal>, normally installed with the ACL
package.</para>
<para>ACL support is a system-wide feature, meaning that either all
clients have ACLs enabled or none do. You cannot specify which clients
should enable ACLs.</para>
</section>
58 <section xml:id="managingSecurity.acl.using" remap="h3">
<title><indexterm>
<primary>Access Control List (ACL)</primary>
<secondary>using</secondary>
62 </indexterm>Using ACLs with the Lustre Software</title>
63 <para>POSIX Access Control Lists (ACLs) can be used with the Lustre
64 software. An ACL consists of file entries representing permissions based
65 on standard POSIX file system object permissions that define three
66 classes of user (owner, group and other). Each class is associated with
67 a set of permissions [read (r), write (w) and execute (x)].</para>
<para>Owner class permissions define access privileges of the file
owner.</para>
<para>Group class permissions define access privileges of the owning
group.</para>
78 <para>Other class permissions define access privileges of all users
79 not in the owner or group class.</para>
82 <para>The <literal>ls -l</literal> command displays the owner, group, and
83 other class permissions in the first column of its output (for example,
<literal>-rw-r-----</literal> for a regular file with read and write
85 access for the owner class, read access for the group class, and no
86 access for others).</para>
87 <para>Minimal ACLs have three entries. Extended ACLs have more than the
88 three entries. Extended ACLs also contain a mask entry and may contain
89 any number of named user and named group entries.</para>
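<para>For illustration, the difference between a minimal and an extended
ACL can be seen with the standard tools (the directory and user names
below are examples only):</para>
<screen>$ getfacl --omit-header somedir      # minimal ACL: three entries
user::rwx
group::r-x
other::---
$ setfacl -m user:joe:rwx somedir
$ getfacl --omit-header somedir      # extended ACL: named user and mask entries
user::rwx
user:joe:rwx
group::r-x
mask::rwx
other::---</screen>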
<para>The MDS needs to be configured to enable ACLs. Use
<literal>--mountfsoptions</literal> to enable ACLs when creating your
MDT. For example:</para>
<screen>$ mkfs.lustre --fsname spfs --mountfsoptions=acl --mdt --mgs /dev/sda</screen>
<para>Alternatively, you can enable ACLs at run time by using the
<literal>acl</literal> option with <literal>mount</literal>:</para>
97 <screen>$ mount -t lustre -o acl /dev/sda /mnt/mdt</screen>
98 <para>To check ACLs on the MDS:</para>
<screen>$ lctl get_param -n mdc.home-MDT0000-mdc-*.connect_flags | grep acl
acl</screen>
100 <para>To mount the client with no ACLs:</para>
101 <screen>$ mount -t lustre -o noacl ibmds2@o2ib:/home /home</screen>
102 <para>ACLs are enabled in a Lustre file system on a system-wide basis;
103 either all clients enable ACLs or none do. Activating ACLs is controlled
104 by MDS mount options <literal>acl</literal> / <literal>noacl</literal>
(enable/disable ACLs). Client-side mount options
<literal>acl</literal>/<literal>noacl</literal> are ignored: you do not
need to change the client configuration, and the <literal>acl</literal>
string will not appear in the client <literal>/etc/mtab</literal>. The
client <literal>acl</literal> mount option is no longer needed. If a
client is mounted with that option, then this message appears in the
MDS syslog:</para>
110 <screen>...MDS requires ACL support but client does not</screen>
111 <para>The message is harmless but indicates a configuration issue, which
112 should be corrected.</para>
<para>If ACLs are not enabled on the MDS, then any attempts to reference
an ACL on a client return an <literal>Operation not supported</literal>
error.</para>
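<para>For example, on a client of a file system whose MDS was mounted
without ACL support, an attempt to set an ACL fails with a message
similar to the following (the user name is an example):</para>
<screen>$ setfacl -m user:joe:rw file
setfacl: file: Operation not supported</screen>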
116 <section xml:id="managingSecurity.acl.examples" remap="h3">
<title><indexterm>
<primary>Access Control List (ACL)</primary>
<secondary>examples</secondary>
120 </indexterm>Examples</title>
121 <para>These examples are taken directly from the POSIX paper referenced
122 above. ACLs on a Lustre file system work exactly like ACLs on any Linux
123 file system. They are manipulated with the standard tools in the
standard manner. Below, we create a directory and allow a specific user
access to it:</para>
126 <screen>[root@client lustre]# umask 027
127 [root@client lustre]# mkdir rain
128 [root@client lustre]# ls -ld rain
129 drwxr-x--- 2 root root 4096 Feb 20 06:50 rain
130 [root@client lustre]# getfacl rain
138 [root@client lustre]# setfacl -m user:chirag:rwx rain
139 [root@client lustre]# ls -ld rain
140 drwxrwx---+ 2 root root 4096 Feb 20 06:50 rain
141 [root@client lustre]# getfacl --omit-header rain
149 <section xml:id="managingSecurity.root_squash">
<title><indexterm>
<primary>root squash</primary>
152 </indexterm>Using Root Squash</title>
153 <para>Root squash is a security feature which restricts super-user access
154 rights to a Lustre file system. Without the root squash feature enabled,
155 Lustre file system users on untrusted clients could access or modify files
156 owned by root on the file system, including deleting them. Using the root
157 squash feature restricts file access/modifications as the root user to
158 only the specified clients. Note, however, that this does
159 <emphasis>not</emphasis> prevent users on insecure clients from accessing
160 files owned by <emphasis>other</emphasis> users.</para>
161 <para>The root squash feature works by re-mapping the user ID (UID) and
162 group ID (GID) of the root user to a UID and GID specified by the system
163 administrator, via the Lustre configuration management server (MGS). The
164 root squash feature also enables the Lustre file system administrator to
specify a set of clients for which UID/GID re-mapping does not
apply.</para>
<note><para>Nodemaps (<xref linkend="lustrenodemap.title" />) are an
alternative to root squash, since they also allow root squash on a
per-client basis. With UID maps, the clients can even have a local root
UID without actually having root access to the filesystem
itself.</para></note>
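<para>As a brief sketch of the nodemap alternative (see
<xref linkend="lustrenodemap.title" /> for the authoritative syntax; the
nodemap name and NID range below are examples), root can be squashed to
UID/GID 99 for one group of clients only:</para>
<screen>mgs# lctl nodemap_add remotes
mgs# lctl nodemap_add_range --name remotes --range 192.168.1.[1-254]@tcp
mgs# lctl nodemap_modify --name remotes --property squash_uid --value 99
mgs# lctl nodemap_modify --name remotes --property squash_gid --value 99</screen>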
171 <section xml:id="managingSecurity.root_squash.config" remap="h3">
<title><indexterm>
<primary>root squash</primary>
<secondary>configuring</secondary>
175 </indexterm>Configuring Root Squash</title>
176 <para>Root squash functionality is managed by two configuration
177 parameters, <literal>root_squash</literal> and
178 <literal>nosquash_nids</literal>.</para>
181 <para>The <literal>root_squash</literal> parameter specifies the UID
182 and GID with which the root user accesses the Lustre file system.
186 <para>The <literal>nosquash_nids</literal> parameter specifies the set
187 of clients to which root squash does not apply. LNet NID range
syntax is used for this parameter (see the NID range syntax rules
described in <xref linkend="managingSecurity.root_squash.tips"/>). For
193 <screen>nosquash_nids=172.16.245.[0-255/2]@tcp</screen>
<para>In this example, root squash does not apply to TCP clients on subnet
172.16.245.0 that have an even number as the last component of their IP
address.</para>
198 <section xml:id="managingSecurity.root_squash.tuning">
<title><indexterm>
<primary>root squash</primary><secondary>enabling</secondary>
201 </indexterm>Enabling and Tuning Root Squash</title>
202 <para>The default value for <literal>nosquash_nids</literal> is NULL,
203 which means that root squashing applies to all clients. Setting the root
204 squash UID and GID to 0 turns root squash off.</para>
205 <para>Root squash parameters can be set when the MDT is created
206 (<literal>mkfs.lustre --mdt</literal>). For example:</para>
207 <screen>mds# mkfs.lustre --reformat --fsname=testfs --mdt --mgs \
208 --param "mdt.root_squash=500:501" \
209 --param "mdt.nosquash_nids='0@elan1 192.168.1.[10,11]'" /dev/sda1</screen>
210 <para>Root squash parameters can also be changed on an unmounted device
211 with <literal>tunefs.lustre</literal>. For example:</para>
212 <screen>tunefs.lustre --param "mdt.root_squash=65534:65534" \
--param "mdt.nosquash_nids=192.168.0.13@tcp0" /dev/sda1</screen>
215 <para>Root squash parameters can also be changed with the
216 <literal>lctl conf_param</literal> command. For example:</para>
217 <screen>mgs# lctl conf_param testfs.mdt.root_squash="1000:101"
218 mgs# lctl conf_param testfs.mdt.nosquash_nids="*@tcp"</screen>
219 <para>To retrieve the current root squash parameter settings, the
220 following <literal>lctl get_param</literal> commands can be used:</para>
221 <screen>mgs# lctl get_param mdt.*.root_squash
222 mgs# lctl get_param mdt.*.nosquash_nids</screen>
<para>When using the <literal>lctl conf_param</literal> command, keep in
mind:</para>
227 <para><literal>lctl conf_param</literal> must be run on a live MGS
231 <para><literal>lctl conf_param</literal> causes the parameter to
232 change on all MDSs</para>
<para><literal>lctl conf_param</literal> is to be used once per
parameter</para>
240 <para>The root squash settings can also be changed temporarily with
241 <literal>lctl set_param</literal> or persistently with
242 <literal>lctl set_param -P</literal>. For example:</para>
243 <screen>mgs# lctl set_param mdt.testfs-MDT0000.root_squash="1:0"
244 mgs# lctl set_param -P mdt.testfs-MDT0000.root_squash="1:0"</screen>
<para>The <literal>nosquash_nids</literal> list can be cleared with:</para>
<screen>mgs# lctl conf_param testfs.mdt.nosquash_nids="NONE"</screen>
<para>or</para>
<screen>mgs# lctl conf_param testfs.mdt.nosquash_nids="clear"</screen>
<para>If the <literal>nosquash_nids</literal> value consists of several
NID ranges (e.g. <literal>0@elan</literal>, <literal>1@elan1</literal>),
the list of NID ranges must be quoted with single (') or double
(") quotation marks. List elements must be separated with a
space. For example:</para>
254 <screen>mds# mkfs.lustre ... --param "mdt.nosquash_nids='0@elan1 1@elan2'" /dev/sda1
255 lctl conf_param testfs.mdt.nosquash_nids="24@elan 15@elan1"</screen>
256 <para>These are examples of incorrect syntax:</para>
257 <screen>mds# mkfs.lustre ... --param "mdt.nosquash_nids=0@elan1 1@elan2" /dev/sda1
258 lctl conf_param testfs.mdt.nosquash_nids=24@elan 15@elan1</screen>
<para>To check root squash parameters, use the
<literal>lctl get_param</literal> command:</para>
261 <screen>mds# lctl get_param mdt.testfs-MDT0000.root_squash
262 lctl get_param mdt.*.nosquash_nids</screen>
<para>An empty <literal>nosquash_nids</literal> list is reported as
<literal>NONE</literal>.</para>
267 <section xml:id="managingSecurity.root_squash.tips" remap="h3">
<title><indexterm>
<primary>root squash</primary>
<secondary>tips</secondary>
271 </indexterm>Tips on Using Root Squash</title>
272 <para>Lustre configuration management limits root squash in several ways.
<para>The <literal>lctl conf_param</literal> value overwrites the
parameter's previous value. If the new value uses an incorrect
syntax, then the system continues with the old parameters and the
previously-correct value is lost on remount. Be careful, therefore,
when tuning root squash.</para>
283 <para><literal>mkfs.lustre</literal> and
284 <literal>tunefs.lustre</literal> do not perform parameter syntax
285 checking. If the root squash parameters are incorrect, they are
286 ignored on mount and the default values are used instead.</para>
289 <para>Root squash parameters are parsed with rigorous syntax checking.
290 The root_squash parameter should be specified as
291 <literal><decnum>:<decnum></literal>. The
292 <literal>nosquash_nids</literal> parameter should follow LNet NID
293 range list syntax.</para>
296 <para>LNet NID range syntax:</para>
297 <screen><nidlist> :== <nidrange> [ ' ' <nidrange> ]
298 <nidrange> :== <addrrange> '@' <net>
299 <addrrange> :== '*' |
300 <ipaddr_range> |
301 <numaddr_range>
302 <ipaddr_range> :==
303 <numaddr_range>.<numaddr_range>.<numaddr_range>.<numaddr_range>
<numaddr_range> :== <number> |
                    <expr_list>
306 <expr_list> :== '[' <range_expr> [ ',' <range_expr>] ']'
307 <range_expr> :== <number> |
308 <number> '-' <number> |
309 <number> '-' <number> '/' <number>
310 <net> :== <netname> | <netname><number>
311 <netname> :== "lo" | "tcp" | "o2ib"
312 | "ra" | "elan"
313 <number> :== <nonnegative decimal> | <hexadecimal></screen>
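<para>For example, the following NID range expressions are all valid
under this syntax:</para>
<screen>*@tcp                    # all TCP clients
0@elan1                  # a single numeric address
192.168.0.[10-20]@tcp    # a range in the last IP address component
172.16.245.[0-255/2]@tcp # every second address in the subnet
[2-10/2]@elan            # numeric addresses 2, 4, 6, 8, 10</screen>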
315 <para>For networks using numeric addresses (e.g. elan), the address
316 range must be specified in the
317 <literal><numaddr_range></literal> syntax. For networks using
IP addresses, the address range must be specified in the
<literal><ipaddr_range></literal> syntax. For example, if elan is using
320 numeric addresses, <literal>1.2.3.4@elan</literal> is incorrect.
325 <section xml:id="managingSecurity.isolation">
326 <title><indexterm><primary>Isolation</primary></indexterm>
327 Isolating Clients to a Sub-directory Tree</title>
328 <para>Isolation is the Lustre implementation of the generic concept of
329 multi-tenancy, which aims at providing separated namespaces from a single
330 filesystem. Lustre Isolation enables different populations of users on
331 the same file system beyond normal Unix permissions/ACLs, even when users
332 on the clients may have root access. Those tenants share the same file
333 system, but they are isolated from each other: they cannot access or even
334 see each other’s files, and are not aware that they are sharing common
335 file system resources.</para>
336 <para>Lustre Isolation leverages the Fileset feature
337 (<xref linkend="SystemConfigurationUtilities.fileset" />)
338 to mount only a subdirectory of the filesystem rather than the root
In order to achieve isolation, the subdirectory mount, which presents to
tenants only their own fileset, has to be imposed on the clients. To that
end, we make use of the nodemap feature
343 (<xref linkend="lustrenodemap.title" />). We group all clients used by a
344 tenant under a common nodemap entry, and we assign to this nodemap entry
345 the fileset to which the tenant is restricted.</para>
346 <section xml:id="managingSecurity.isolation.clientid" remap="h3">
347 <title><indexterm><primary>Isolation</primary><secondary>
348 client identification</secondary></indexterm>Identifying Clients</title>
349 <para>Enforcing multi-tenancy on Lustre relies on the ability to properly
350 identify the client nodes used by a tenant, and trust those identities.
351 This can be achieved by having physical hardware and/or network
352 security, so that client nodes have well-known NIDs. It is also possible
353 to make use of strong authentication with Kerberos or Shared-Secret Key
354 (see <xref linkend="lustressk" />).
355 Kerberos prevents NID spoofing, as every client needs its own
356 credentials, based on its NID, in order to connect to the servers.
357 Shared-Secret Key also prevents tenant impersonation, because keys
358 can be linked to a specific nodemap. See
359 <xref linkend="ssknodemaprole" /> for detailed explanations.
362 <section xml:id="managingSecurity.isolation.configuring" remap="h3">
363 <title><indexterm><primary>Isolation</primary><secondary>
364 configuring</secondary></indexterm>Configuring Isolation</title>
365 <para>Isolation on Lustre can be achieved by setting the
366 <literal>fileset</literal> parameter on a nodemap entry. All clients
367 belonging to this nodemap entry will automatically mount this fileset
368 instead of the root directory. For example:</para>
369 <screen>mgs# lctl nodemap_set_fileset --name tenant1 --fileset '/dir1'</screen>
<para>All clients matching the <literal>tenant1</literal> nodemap will
thus automatically be presented the fileset <literal>/dir1</literal>
when mounting; that is, these clients perform an implicit subdirectory
mount of <literal>/dir1</literal>.
If the subdirectory defined as the fileset does not exist on the file
system, it will prevent any client belonging to the nodemap from mounting
<para>To delete the fileset parameter, set it to an empty string:</para>
384 <screen>mgs# lctl nodemap_set_fileset --name tenant1 --fileset ''</screen>
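<para>Assuming the nodemap parameters are readable with
<literal>lctl get_param</literal> (the nodemap name here is the example
used above), the fileset currently assigned to a nodemap can be checked
on the MGS with:</para>
<screen>mgs# lctl get_param nodemap.tenant1.fileset</screen>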
386 <section xml:id="managingSecurity.isolation.permanent" remap="h3">
387 <title><indexterm><primary>Isolation</primary><secondary>
making permanent</secondary></indexterm>Making Isolation
Permanent</title>
390 <para>In order to make isolation permanent, the fileset parameter on the
391 nodemap has to be set with <literal>lctl set_param</literal> with the
392 <literal>-P</literal> option.</para>
393 <screen>mgs# lctl set_param nodemap.tenant1.fileset=/dir1
394 mgs# lctl set_param -P nodemap.tenant1.fileset=/dir1</screen>
395 <para>This way the fileset parameter will be stored in the Lustre config
396 logs, letting the servers retrieve the information after a restart.
400 <section xml:id="managingSecurity.sepol" condition='l2D'>
401 <title><indexterm><primary>selinux policy check</primary></indexterm>
402 Checking SELinux Policy Enforced by Lustre Clients</title>
403 <para>SELinux provides a mechanism in Linux for supporting Mandatory Access
404 Control (MAC) policies. When a MAC policy is enforced, the operating
405 system’s (OS) kernel defines application rights, firewalling applications
406 from compromising the entire system. Regular users do not have the ability to
407 override the policy.</para>
408 <para>One purpose of SELinux is to protect the
<emphasis role="bold">OS</emphasis> from privilege escalation. To that
end, SELinux defines confined and unconfined domains for processes and
users. Each process, user, and file is assigned a security context, and
rules define the allowed operations by processes and users on
files.</para>
414 <para>Another purpose of SELinux can be to protect
415 <emphasis role="bold">data</emphasis> sensitivity, thanks to Multi-Level
416 Security (MLS). MLS works on top of SELinux, by defining the concept of
417 security levels in addition to domains. Each process, user and file is
418 assigned a security level, and the model states that processes and users
419 can read the same or lower security level, but can only write to their own
or higher security level.</para>
422 <para>From a file system perspective, the security context of files must be
423 stored permanently. Lustre makes use of the
424 <literal>security.selinux</literal> extended attributes on files to hold
this information. Lustre supports SELinux on the client side. To have
MAC and MLS on Lustre, all that is required is to enforce the
appropriate SELinux policy (as provided by the Linux distribution) on
all Lustre clients. No SELinux is required on Lustre servers.</para>
<para>Because Lustre is a distributed file system, the particularity when
using MLS is that Lustre must make sure data is always accessed by nodes
with the SELinux MLS policy properly enforced. Otherwise, data is not
protected. This means Lustre has to check that SELinux is properly
enforced on the client side, with the right, unaltered policy. And if SELinux
is not enforced as expected on a client, the server denies its access to
438 <section xml:id="managingSecurity.sepol.determining" remap="h3">
439 <title><indexterm><primary>selinux policy check</primary><secondary>
determining</secondary></indexterm>Determining SELinux Policy
Info</title>
<para>A string that represents the SELinux status info will be used by
servers as a reference, to check whether clients are enforcing SELinux
444 properly. This reference string can be obtained on a client node known
445 to enforce the right SELinux policy, by calling the
446 <literal>l_getsepol</literal> command line utility:</para>
447 <screen>client# l_getsepol
448 SELinux status info: 1:mls:31:40afb76d077c441b69af58cccaaa2ca63641ed6e21b0a887dc21a684f508b78f</screen>
<para>The string describing the SELinux policy has the following
format:</para>
451 <para><literal>mode:name:version:hash</literal></para>
<para><literal>mode</literal> is a digit telling whether SELinux is in
Permissive mode (0) or Enforcing mode (1)</para>
<para><literal>name</literal> is the name of the SELinux policy</para>
<para><literal>version</literal> is the version of the SELinux
policy</para>
467 <para><literal>hash</literal> is the computed hash of the binary
468 representation of the policy, as exported in
469 /etc/selinux/<literal>name</literal>/policy/policy.
470 <literal>version</literal></para>
474 <section xml:id="managingSecurity.sepol.configuring" remap="h3">
475 <title><indexterm><primary>selinux policy check</primary><secondary>
476 enforcing</secondary></indexterm>Enforcing SELinux Policy Check</title>
477 <para>SELinux policy check can be enforced by setting the
478 <literal>sepol</literal> parameter on a nodemap entry. All clients
479 belonging to this nodemap entry must enforce the SELinux policy
480 described by this parameter, otherwise they are denied access to the
481 Lustre file system. For example:</para>
482 <screen>mgs# lctl nodemap_set_sepol --name restricted
483 --sepol '1:mls:31:40afb76d077c441b69af58cccaaa2ca63641ed6e21b0a887dc21a684f508b78f'</screen>
<para>So all clients matching the <literal>restricted</literal> nodemap
must enforce the SELinux policy whose description matches
<literal>1:mls:31:40afb76d077c441b69af58cccaaa2ca63641ed6e21b0a887dc21a684f508b78f</literal>.
487 If not, they will get Permission Denied when trying to mount or access
488 files on the Lustre file system.</para>
<para>To delete the <literal>sepol</literal> parameter, set it to an
empty string:</para>
<screen>mgs# lctl nodemap_set_sepol --name restricted --sepol ''</screen>
492 <para>See <xref linkend="lustrenodemap.title" /> for more details about
493 the Nodemap feature.</para>
495 <section xml:id="managingSecurity.sepol.permanent" remap="h3">
496 <title><indexterm><primary>selinux policy check</primary><secondary>
making permanent</secondary></indexterm>Making SELinux Policy Check
Permanent</title>
499 <para>In order to make SELinux Policy check permanent, the sepol parameter
500 on the nodemap has to be set with <literal>lctl set_param</literal> with
501 the <literal>-P</literal> option.</para>
502 <screen>mgs# lctl set_param nodemap.restricted.sepol=1:mls:31:40afb76d077c441b69af58cccaaa2ca63641ed6e21b0a887dc21a684f508b78f
503 mgs# lctl set_param -P nodemap.restricted.sepol=1:mls:31:40afb76d077c441b69af58cccaaa2ca63641ed6e21b0a887dc21a684f508b78f</screen>
504 <para>This way the sepol parameter will be stored in the Lustre config
505 logs, letting the servers retrieve the information after a restart.
508 <section xml:id="managingSecurity.sepol.client" remap="h3">
509 <title><indexterm><primary>selinux policy check</primary><secondary>
sending client</secondary></indexterm>Sending SELinux Status Info from
Clients</title>
<para>In order for Lustre clients to send their SELinux status
information when SELinux is enabled locally, the
<literal>send_sepol</literal> parameter of the <literal>ptlrpc</literal>
kernel module has to be set to a non-zero value.
<literal>send_sepol</literal> accepts various values:</para>
519 <para>0: do not send SELinux policy info;</para>
522 <para>-1: fetch SELinux policy info for every request;</para>
525 <para>N > 0: only fetch SELinux policy info every N seconds. Use
526 <literal>N = 2^31-1</literal> to have SELinux policy info
527 fetched only at mount time.</para>
<para>Clients that are part of a nodemap on which
<literal>sepol</literal> is defined must send SELinux status info,
and the SELinux policy they enforce must match the representation
stored in the nodemap; otherwise, they will be denied access to the
Lustre file system.</para>
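<para>For example, to have clients refresh their SELinux status info at
most every 60 seconds, the <literal>send_sepol</literal> module parameter
can be set on each client (via the usual sysfs path for kernel module
parameters):</para>
<screen>client# echo 60 > /sys/module/ptlrpc/parameters/send_sepol</screen>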
537 <section xml:id="managingSecurity.clientencryption" condition='l2E'>
538 <title><indexterm><primary>Client-side encryption</primary></indexterm>
539 Encrypting files and directories</title>
<para>The purpose of client-side encryption is to provide a special
directory for each user, in which to safely store sensitive files. The
goals are to protect data in transit between clients and servers, and to
protect data at rest.</para>
544 <para>This feature is implemented directly at the Lustre client level.
545 Lustre client-side encryption relies on kernel <literal>fscrypt</literal>.
546 <literal>fscrypt</literal> is a library which filesystems can hook into to
547 support transparent encryption of files and directories. As a consequence,
548 the key points described below are extracted from
549 <literal>fscrypt</literal> documentation.</para>
550 <para>For full details, please refer to documentation available with the
551 Lustre sources, under the
552 <literal>Documentation/client_side_encryption</literal> directory.
554 <note><para>The client-side encryption feature is available natively on
555 Lustre clients running a Linux distribution with at least kernel 5.4. It
556 is also available thanks to an additional kernel library provided by
557 Lustre, on clients that run a Linux distribution with basic support for
558 encryption, including:</para>
560 <listitem><para>CentOS/RHEL 8.1 and later;</para></listitem>
561 <listitem><para>Ubuntu 18.04 and later;</para></listitem>
562 <listitem><para>SLES 15 SP2 and later.</para></listitem>
565 <section xml:id="managingSecurity.clientencryption.semantics" remap="h3">
566 <title><indexterm><primary>encryption access semantics</primary>
567 </indexterm>Client-side encryption access semantics</title>
568 <para>Only Lustre clients need access to encryption master keys. Keys are
569 added to the filesystem-level encryption keyring on the Lustre client.
572 <para><emphasis role="bold">With the key</emphasis></para>
573 <para>With the encryption key, encrypted regular files, directories,
574 and symlinks behave very similarly to their unencrypted
575 counterparts --- after all, the encryption is intended to be
576 transparent. However, astute users may notice some differences in
580 <para>Unencrypted files, or files encrypted with a different
581 encryption policy (i.e. different key, modes, or flags),
582 cannot be renamed or linked into an encrypted directory.
583 However, encrypted files can be renamed within an encrypted
584 directory, or into an unencrypted directory.</para>
585 <note><para>"moving" an unencrypted file into an encrypted
586 directory, e.g. with the <literal>mv</literal> program, is
587 implemented in userspace by a copy followed by a delete. Be
588 aware the original unencrypted data may remain recoverable
589 from free space on the disk; it is best to keep all files
590 encrypted from the very beginning.</para></note>
<listitem><para>On Lustre, Direct I/O is supported for encrypted
files.</para></listitem>
595 <listitem><para>The <literal>fallocate()</literal> operations
596 <literal>FALLOC_FL_COLLAPSE_RANGE</literal>,
597 <literal>FALLOC_FL_INSERT_RANGE</literal>, and
598 <literal>FALLOC_FL_ZERO_RANGE</literal> are not
599 supported on encrypted files and will fail with
600 <literal>EOPNOTSUPP</literal>.
<listitem><para>DAX (Direct Access) is not supported on encrypted
files.</para></listitem>
606 <listitem><para><literal>mmap</literal> is supported. This is
607 possible because the pagecache for an encrypted file contains
608 the plaintext, not the ciphertext.</para>
613 <para><emphasis role="bold">Without the key</emphasis></para>
614 <para>Some filesystem operations may be performed on encrypted
615 regular files, directories, and symlinks even before their
616 encryption key has been added, or after their encryption key has
620 <para>File metadata may be read, e.g. using
621 <literal>stat()</literal>.</para>
624 <para>Directories may be listed, and the whole namespace tree
625 may be walked through.
629 <para>Files may be deleted. That is, nondirectory files may be
630 deleted with <literal>unlink()</literal> as usual, and empty
631 directories may be deleted with <literal>rmdir()</literal> as
632 usual. Therefore, <literal>rm</literal> and
633 <literal>rm -r</literal> will work as expected.</para>
636 <para>Symlink targets may be read and followed, but they will
637 be presented in encrypted form, similar to filenames in
directories. Hence, they are unlikely to point to anywhere
useful.</para>
642 <para>Without the key, regular files cannot be opened or truncated.
643 Attempts to do so will fail with <literal>ENOKEY</literal>. This
644 implies that any regular file operations that require a file
645 descriptor, such as <literal>read()</literal>,
646 <literal>write()</literal>, <literal>mmap()</literal>,
647 <literal>fallocate()</literal>, and <literal>ioctl()</literal>,
648 are also forbidden.</para>
649 <para>Also without the key, files of any type (including
650 directories) cannot be created or linked into an encrypted
651 directory, nor can a name in an encrypted directory be the source
652 or target of a rename, nor can an <literal>O_TMPFILE</literal>
653 temporary file be created in an encrypted directory. All such
654 operations will fail with <literal>ENOKEY</literal>.</para>
<para>It is not currently possible to back up and restore encrypted
files without the encryption key. This would require special
657 APIs which have not yet been implemented.</para>
<para><emphasis role="bold">Encryption policy
enforcement</emphasis></para>
662 <para>After an encryption policy has been set on a directory, all
663 regular files, directories, and symbolic links created in that
664 directory (recursively) will inherit that encryption policy.
665 Special files --- that is, named pipes, device nodes, and UNIX
666 domain sockets --- will not be encrypted.</para>
667 <para>Except for those special files, it is forbidden to have
668 unencrypted files, or files encrypted with a different encryption
669 policy, in an encrypted directory tree.</para>
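<para>As an illustrative sketch (assuming the <literal>fscrypt</literal>
userspace tool is installed and set up for the mount point; the mount
point and directory names are examples), an encryption policy set on a
directory is inherited by everything created beneath it:</para>
<screen>client# fscrypt setup /mnt/lustre
client# mkdir /mnt/lustre/vault
client# fscrypt encrypt /mnt/lustre/vault
client# touch /mnt/lustre/vault/secret.txt   # inherits the directory's policy</screen>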
674 <section xml:id="managingSecurity.clientencryption.keyhierarchy" remap="h3">
675 <title><indexterm><primary>encryption key hierarchy</primary>
676 </indexterm>Client-side encryption key hierarchy</title>
677 <para>Each encrypted directory tree is protected by a master key.</para>
678 <para>To "unlock" an encrypted directory tree, userspace must provide the
679 appropriate master key. There can be any number of master keys, each
of which protects any number of directory trees on any number of
filesystems.</para>
</section>
683 <section xml:id="managingSecurity.clientencryption.modes" remap="h3">
684 <title><indexterm><primary>encryption modes usage</primary>
685 </indexterm>Client-side encryption modes and usage</title>
686 <para><literal>fscrypt</literal> allows one encryption mode to be
687 specified for file contents and one encryption mode to be specified for
688 filenames. Different directory trees are permitted to use different
689 encryption modes. Currently, the following pairs of encryption modes are
693 <para>AES-256-XTS for contents and AES-256-CTS-CBC for filenames
697 <para>AES-128-CBC for contents and AES-128-CTS-CBC for filenames
701 <para>If unsure, you should use the (AES-256-XTS, AES-256-CTS-CBC) pair.
<warning><para>In Lustre 2.14, client-side encryption only supports
content encryption, not filename encryption. As a consequence, only
the content encryption mode is taken into account, while the filename
encryption mode is ignored and filenames are left in clear text.</para>
709 <section xml:id="managingSecurity.clientencryption.threatmodel" remap="h3">
710 <title><indexterm><primary>encryption threat model</primary>
711 </indexterm>Client-side encryption threat model</title>
714 <para><emphasis role="bold">Offline attacks</emphasis></para>
715 <para>For the Lustre case, block devices are Lustre targets attached
716 to the Lustre servers. Manipulating the filesystem offline means
717 accessing the filesystem on these targets while Lustre is offline.
719 <para>Provided that a strong encryption key is chosen,
720 <literal>fscrypt</literal> protects the confidentiality of file
721 contents in the event of a single point-in-time permanent offline
722 compromise of the block device content.
723 Lustre client-side encryption does not protect the confidentiality
724 of metadata, e.g. file names, file sizes, file permissions, file
725 timestamps, and extended attributes. Also, the existence and
726 location of holes (unallocated blocks which logically contain all
727 zeroes) in files is not protected.</para>
730 <para><emphasis role="bold">Online attacks</emphasis></para>
733 <para>On Lustre client</para>
734 <para>After an encryption key has been added,
735 <literal>fscrypt</literal> does not hide the plaintext file
736 contents or filenames from other users on the same node.
737 Instead, existing access control mechanisms such as file mode
738 bits, POSIX ACLs, LSMs, or namespaces should be used for this
740 <para>For the Lustre case, it means plaintext file contents or
741 filenames are not hidden from other users on the same Lustre
743 <para>An attacker who compromises the system enough to read from
744 arbitrary memory, e.g. by exploiting a kernel security
745 vulnerability, can compromise all encryption keys that are
747 However, <literal>fscrypt</literal> allows encryption keys to
748 be removed from the kernel, which may protect them from later
749 compromise. Key removal can be carried out by non-root users.
750 In more detail, the key removal will wipe the master encryption
751 key from kernel memory. Moreover, it will try to evict all
752 cached inodes which had been "unlocked" using the key, thereby
753 wiping their per-file keys and making them once again appear
754 "locked", i.e. in ciphertext or encrypted form.</para>
757 <para>On Lustre server</para>
758 <para>An attacker on a Lustre server who compromises the system
759 enough to read arbitrary memory, e.g. by exploiting a kernel
security vulnerability, cannot compromise the content of Lustre files.
761 Indeed, encryption keys are not forwarded to the Lustre servers,
762 and servers do not carry out decryption or encryption.
763 Moreover, bulk RPCs received by servers contain encrypted data,
764 which is written as-is to the underlying filesystem.</para>
770 <section xml:id="managingSecurity.clientencryption.fscrypt" remap="h3">
771 <title><indexterm><primary>encryption fscrypt policy</primary>
772 </indexterm>Manage encryption on directories</title>
773 <para>By default, Lustre client-side encryption is enabled, letting users
774 define encryption policies on a per-directory basis.</para>
775 <note><para>Administrators can decide to prevent a Lustre client
776 mount-point from using encryption by specifying the
<literal>noencrypt</literal> client mount option. This can also be
enforced from the server side via the
<literal>forbid_encryption</literal> property on nodemaps. See
<xref linkend="alteringproperties"/> for how to manage nodemaps.
<para>The <literal>fscrypt</literal> userspace tool can be used to manage
encryption policies. See
<link xl:href="https://github.com/google/fscrypt">https://github.com/google/fscrypt</link>
for comprehensive explanations. Below are examples of how to use this
tool with Lustre. Unless told otherwise, commands must be run on Lustre
<para>Two preliminary steps are required before actually deciding
which directories to encrypt, and this is the only
functionality that requires root privileges. The administrator has to
793 <screen># fscrypt setup
794 Customizing passphrase hashing difficulty for this system...
795 Created global config file at "/etc/fscrypt.conf".
796 Metadata directories created at "/.fscrypt".</screen>
797 <para>This first command has to be run on all clients that want to use
798 encryption, as it sets up global fscrypt parameters outside of
800 <screen># fscrypt setup /mnt/lustre
801 Metadata directories created at "/mnt/lustre/.fscrypt"</screen>
802 <para>This second command has to be run on just one Lustre
804 <note><para>The file <literal>/etc/fscrypt.conf</literal> can be
805 edited. It is strongly recommended to set
806 <literal>policy_version</literal> to 2, so that
807 <literal>fscrypt</literal> wipes files from memory when the
808 encryption key is removed.</para></note>
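<para>For illustration, with <literal>policy_version</literal> set to 2
the relevant part of <literal>/etc/fscrypt.conf</literal> could look as
follows (a sketch following the <literal>fscrypt</literal> configuration
format; the other option values shown are common defaults that may
differ on your system):</para>
<screen>"options": {
        "padding": "32",
        "contents": "AES_256_XTS",
        "filenames": "AES_256_CTS",
        "policy_version": "2"
}</screen>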
811 <para>Now a regular user is able to select a directory to
813 <screen>$ fscrypt encrypt /mnt/lustre/vault
814 The following protector sources are available:
815 1 - Your login passphrase (pam_passphrase)
816 2 - A custom passphrase (custom_passphrase)
817 3 - A raw 256-bit key (raw_key)
818 Enter the source number for the new protector [2 - custom_passphrase]: 2
819 Enter a name for the new protector: shield
820 Enter custom passphrase for protector "shield":
822 "/mnt/lustre/vault" is now encrypted, unlocked, and ready for use.</screen>
<para>Starting from here, all files and directories created under
<literal>/mnt/lustre/vault</literal> will be encrypted, according
to the policy defined at the previous step.</para>
826 <note><para>The encryption policy is inherited by all subdirectories.
827 It is not possible to change the policy for a subdirectory.</para>
<para>Another user can decide to encrypt a different directory with
their own protector:</para>
833 <screen>$ fscrypt encrypt /mnt/lustre/private
834 Should we create a new protector? [y/N] Y
835 The following protector sources are available:
836 1 - Your login passphrase (pam_passphrase)
837 2 - A custom passphrase (custom_passphrase)
838 3 - A raw 256-bit key (raw_key)
839 Enter the source number for the new protector [2 - custom_passphrase]: 2
840 Enter a name for the new protector: armor
841 Enter custom passphrase for protector "armor":
843 "/mnt/lustre/private" is now encrypted, unlocked, and ready for use.</screen>
846 <para>Users can decide to lock an encrypted directory at any
848 <screen>$ fscrypt lock /mnt/lustre/vault
849 "/mnt/lustre/vault" is now locked.</screen>
<para>This action prevents access to encrypted content and, by
removing the key from memory, also wipes files from memory,
provided they are no longer open.</para>
855 <para>Users regain access to the encrypted directory with the command:
857 <screen>$ fscrypt unlock /mnt/lustre/vault
858 Enter custom passphrase for protector "shield":
859 "/mnt/lustre/vault" is now unlocked and ready for use.</screen>
<para>In fact, <literal>fscrypt</literal> does not give direct access
to master keys, but to protectors that are used to encrypt them.
This mechanism makes it possible to change a passphrase:</para>
865 <screen>$ fscrypt status /mnt/lustre
866 lustre filesystem "/mnt/lustre" has 2 protectors and 2 policies
868 PROTECTOR LINKED DESCRIPTION
869 deacab807bf0e788 No custom protector "shield"
870 e691ae7a1990fc2a No custom protector "armor"
872 POLICY UNLOCKED PROTECTORS
873 52b2b5aff0e59d8e0d58f962e715862e No deacab807bf0e788
874 374e8944e4294b527e50363d86fc9411 No e691ae7a1990fc2a
876 $ fscrypt metadata change-passphrase --protector=/mnt/lustre:deacab807bf0e788
877 Enter old custom passphrase for protector "shield":
878 Enter new custom passphrase for protector "shield":
880 Passphrase for protector deacab807bf0e788 successfully changed.</screen>
<para>It is also possible to have multiple protectors for the same
policy. This is really useful when several users share an encrypted
directory, because it avoids the need to share any secret between
885 <screen>$ fscrypt status /mnt/lustre/vault
886 "/mnt/lustre/vault" is encrypted with fscrypt.
888 Policy: 52b2b5aff0e59d8e0d58f962e715862e
889 Options: padding:32 contents:AES_256_XTS filenames:AES_256_CTS policy_version:2
892 Protected with 1 protector:
893 PROTECTOR LINKED DESCRIPTION
894 deacab807bf0e788 No custom protector "shield"
896 $ fscrypt metadata create protector /mnt/lustre
897 Create new protector on "/mnt/lustre" [Y/n] Y
898 The following protector sources are available:
899 1 - Your login passphrase (pam_passphrase)
900 2 - A custom passphrase (custom_passphrase)
901 3 - A raw 256-bit key (raw_key)
902 Enter the source number for the new protector [2 - custom_passphrase]: 2
903 Enter a name for the new protector: bunker
904 Enter custom passphrase for protector "bunker":
906 Protector f3cc1b5cf9b8f41c created on filesystem "/mnt/lustre".
908 $ fscrypt metadata add-protector-to-policy
909 --protector=/mnt/lustre:f3cc1b5cf9b8f41c
910 --policy=/mnt/lustre:52b2b5aff0e59d8e0d58f962e715862e
911 WARNING: All files using this policy will be accessible with this protector!!
912 Protect policy 52b2b5aff0e59d8e0d58f962e715862e with protector f3cc1b5cf9b8f41c? [Y/n] Y
913 Enter custom passphrase for protector "bunker":
914 Enter custom passphrase for protector "shield":
915 Protector f3cc1b5cf9b8f41c now protecting policy 52b2b5aff0e59d8e0d58f962e715862e.
917 $ fscrypt status /mnt/lustre/vault
918 "/mnt/lustre/vault" is encrypted with fscrypt.
920 Policy: 52b2b5aff0e59d8e0d58f962e715862e
921 Options: padding:32 contents:AES_256_XTS filenames:AES_256_CTS policy_version:2
924 Protected with 2 protectors:
925 PROTECTOR LINKED DESCRIPTION
926 deacab807bf0e788 No custom protector "shield"
927 f3cc1b5cf9b8f41c No custom protector "bunker"</screen>
932 <section xml:id="managingSecurity.kerberos">
933 <title><indexterm><primary>Kerberos</primary></indexterm>
934 Configuring Kerberos (KRB) Security</title>
<para>This section describes how to use Kerberos with Lustre.</para>
936 <section xml:id="managingSecurity.kerberos.whatisit">
937 <title>What Is Kerberos?</title>
<para>Kerberos is a mechanism for authenticating all entities (such as
users and servers) on an "unsafe" network. Each of these
entities, known as a "principal", negotiates a runtime key with
the Kerberos server. This key enables principals to verify that messages
from the Kerberos server are authentic. By trusting the Kerberos server,
users and services can authenticate one another.</para>
944 <para>Setting up Lustre with Kerberos can provide advanced security
945 protections for the Lustre network. Broadly, Kerberos offers three types
949 <para>Allows Lustre connection peers (MDS, OSS and clients) to
950 authenticate one another.</para>
953 <para>Protects the integrity of PTLRPC messages from being modified
954 during network transfer.</para>
957 <para>Protects the privacy of the PTLRPC message from being
958 eavesdropped during network transfer.</para>
961 <para>Kerberos uses the “kernel keyring” client upcall mechanism.</para>
963 <section xml:id="managingSecurity.kerberos.securityflavor">
964 <title>Security Flavor</title>
A security flavor is a string describing what kind of authentication
and data transformation are performed on a PTLRPC connection. It
covers both RPC messages and bulk data.
The supported flavors are described in the following table:
975 <colspec align="left" />
976 <colspec align="left" />
977 <colspec align="left" />
978 <colspec align="left" />
979 <colspec align="left" />
989 RPC Message Protection
1002 <emphasis><emphasis role="strong">null</emphasis></emphasis>
1016 <emphasis><emphasis role="strong">krb5n</emphasis></emphasis>
1028 No protection of RPC message, checksum protection
1029 of bulk data, light performance overhead.
1034 <emphasis><emphasis role="strong">krb5a</emphasis></emphasis>
1040 partial integrity (krb5)
Only the header of the RPC message is integrity protected, plus
checksum protection of bulk data; more performance
overhead compared to krb5n.
1053 <emphasis><emphasis role="strong">krb5i</emphasis></emphasis>
1065 transformation algorithm is determined by actual Kerberos
1066 algorithms enforced by KDC and principals; heavy performance
1072 <emphasis><emphasis role="strong">krb5p</emphasis></emphasis>
1084 transformation privacy protection algorithm is determined
1085 by actual Kerberos algorithms enforced by KDC and principals;
1086 the heaviest performance penalty.
1093 <section xml:id="managingSecurity.kerberos.kerberossetup">
1094 <title>Kerberos Setup</title>
1095 <section xml:id="managingSecurity.kerberos.kerberossetup.distribution">
1096 <title>Distribution</title>
<para>Only MIT Kerberos 5, version 1.3 or later, is supported.</para>
1098 <para>For environmental requirements in general, and clock
1099 synchronization in particular, please refer to section
1100 <xref linkend="section_rh2_d4w_gk"/>.</para>
1102 <section xml:id="managingSecurity.kerberos.kerberossetup.configuration">
1103 <title>Principals Configuration</title>
1106 <para>Configure client nodes:</para>
1110 For each client node, create a <literal>lustre_root</literal>
1111 principal and generate keytab.
1113 <screen>kadmin> addprinc -randkey lustre_root/client_host.domain@REALM</screen>
1114 <screen>kadmin> ktadd lustre_root/client_host.domain@REALM</screen>
1118 Install the keytab on the client node.
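<para>When many clients are involved, the per-client
<literal>kadmin</literal> commands above can be generated with a short
shell loop. The sketch below only prints the commands for review; the
realm <literal>EXAMPLE.COM</literal>, the client list, and the keytab
paths are placeholders to adapt to your site:</para>
<screen>#!/bin/bash
# Print the kadmin commands creating one lustre_root principal
# per client node. REALM and the host list are examples only.
gen_principal_cmds() {
    local realm=$1; shift
    local host
    for host in "$@"; do
        echo "addprinc -randkey lustre_root/${host}@${realm}"
        echo "ktadd -k /tmp/keytab.${host} lustre_root/${host}@${realm}"
    done
}
gen_principal_cmds EXAMPLE.COM client1.example.com client2.example.com</screen>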
1124 <para>Configure MGS nodes:</para>
1128 For each MGS node, create a <literal>lustre_mgs</literal>
1129 principal and generate keytab.
1131 <screen>kadmin> addprinc -randkey lustre_mgs/mgs_host.domain@REALM</screen>
<screen>kadmin> ktadd lustre_mgs/mgs_host.domain@REALM</screen>
1136 Install the keytab on the MGS nodes.
1142 <para>Configure MDS nodes:</para>
1146 For each MDS node, create a <literal>lustre_mds</literal>
1147 principal and generate keytab.
1149 <screen>kadmin> addprinc -randkey lustre_mds/mds_host.domain@REALM</screen>
1150 <screen>kadmin> ktadd lustre_mds/mds_host.domain@REALM</screen>
1154 Install the keytab on the MDS nodes.
1160 <para>Configure OSS nodes:</para>
1164 For each OSS node, create a <literal>lustre_oss</literal>
1165 principal and generate keytab.
1167 <screen>kadmin> addprinc -randkey lustre_oss/oss_host.domain@REALM</screen>
1168 <screen>kadmin> ktadd lustre_oss/oss_host.domain@REALM</screen>
Install the keytab on the OSS nodes.
<para>The <emphasis>host.domain</emphasis> should be the FQDN in
your network, otherwise the server might not recognize any GSS
As an alternative to per-client keytabs, if you want to save
the trouble of assigning a unique keytab to each client node,
you can create a general lustre_root principal and its
keytab, and install the same keytab on as many client nodes
as you want. <emphasis role="strong">Be aware that in
this case one compromised client means all clients are
insecure</emphasis>.
1195 <screen>kadmin> addprinc -randkey lustre_root@REALM</screen>
1196 <screen>kadmin> ktadd lustre_root@REALM</screen>
Lustre supports the following <emphasis>enctypes</emphasis> for
MIT Kerberos 5, version 1.3 or higher:
1206 <emphasis>aes128-cts</emphasis>
1211 <emphasis>aes256-cts</emphasis>
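<para>For instance, a principal can be limited to these
<emphasis>enctypes</emphasis> when it is created (a
<literal>kadmin</literal> sketch; the <literal>normal</literal> salt
type is an assumption to adapt to your KDC policy):</para>
<screen>kadmin> addprinc -randkey -e "aes256-cts:normal aes128-cts:normal" lustre_root/client_host.domain@REALM</screen>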
1220 <section xml:id="managingSecurity.kerberos.network">
1221 <title>Networking</title>
1222 <para>On networks for which name resolution to IP address is possible,
1223 like TCP or InfiniBand, the names used in the principals must be the
1224 ones that resolve to the IP addresses used by the Lustre NIDs.</para>
1225 <para>If you are using a network which is
1226 <emphasis role="strong">NOT</emphasis> TCP or InfiniBand (e.g.
1227 PTL4LND), you need to have a <literal>/etc/lustre/nid2hostname</literal>
script on <emphasis role="strong">each</emphasis> node, whose purpose is
to translate a NID into a hostname.
The following is a possible example for PTL4LND:</para>
#!/bin/bash
# convert a NID for a LND to a hostname
#
# called with three arguments: lnd netid nid
# $lnd is the string "PTL4LND", etc.
# $netid is the network identifier in hex string format
# $nid is the NID in hex format
# output the corresponding hostname,
# or an error message prefixed with '@' for error logging.
lnd=$1
netid=$2
# convert hex NID number to decimal
nid=$((0x$3))

case $lnd in
    PTL4LND) # simply add 'node' at the beginning
        echo "node$nid"
        ;;
    *)
        echo "@unknown LND: $lnd"
        ;;
esac
1257 <section xml:id="managingSecurity.kerberos.requiredpackages">
1258 <title>Required packages</title>
Every node should have the following packages installed:
1264 <para>krb5-workstation</para>
1267 <para>krb5-libs</para>
1270 <para>keyutils</para>
1273 <para>keyutils-libs</para>
<para>On the node used to build Lustre with GSS support, the following
packages should be installed:</para>
1280 <para>krb5-devel</para>
1283 <para>keyutils-libs-devel</para>
1287 <section xml:id="managingSecurity.kerberos.buildlustre">
1288 <title>Build Lustre</title>
1290 Enable GSS at configuration time:
1292 <screen>./configure --enable-gss --other-options</screen>
1294 <section xml:id="managingSecurity.kerberos.running">
1295 <title>Running</title>
1296 <section xml:id="managingSecurity.kerberos.running.gssdaemons">
1297 <title>GSS Daemons</title>
1299 Make sure to start the daemon process
1300 <literal>lsvcgssd</literal> on each server node (MGS, MDS and OSS)
1301 before starting Lustre. The command syntax is:
1303 <screen>lsvcgssd [-f] [-v] [-g] [-m] [-o] -k</screen>
1306 <para>-f: run in foreground, instead of as daemon</para>
1309 <para>-v: increase verbosity by 1. For example, to set the verbose
1310 level to 3, run 'lsvcgssd -vvv'. Verbose logging can help you make
1311 sure Kerberos is set up correctly.
1315 <para>-g: service MGS</para>
1318 <para>-m: service MDS</para>
1321 <para>-o: service OSS</para>
1324 <para>-k: enable kerberos support</para>
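<para>For example, to run the daemon in the foreground with increased
verbosity on a node acting as both MGS and MDS (a sketch assuming a
combined MGS/MDS node), which is convenient when first validating the
Kerberos setup:</para>
<screen>lsvcgssd -f -vv -g -m -k</screen>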
1328 <section xml:id="managingSecurity.kerberos.running.settingsecurityflavors">
1329 <title>Setting Security Flavors</title>
1331 Security flavors can be set by defining sptlrpc rules on the MGS.
1332 These rules are persistent, and are in the form:
1333 <literal><spec>=<flavor></literal>
1337 <para>To add a rule:</para>
1338 <screen>mgs> lctl conf_param <spec>=<flavor></screen>
If there is an existing rule on <spec>, it will be
overwritten by the new one.
1344 <para>To delete a rule:</para>
1345 <screen>mgs> lctl conf_param -d <spec></screen>
1348 <para>To list existing rules:</para>
<screen>mgs> lctl get_param mgs.MGS.live.<fs-name> | grep "srpc.flavor"</screen>
<para>If nothing is specified, by default all RPC connections
use the <literal>null</literal> flavor, which means no security.
1361 After you change a rule, it usually takes a few minutes to apply
1362 the new rule to all nodes, depending on global system load.
Before you change a rule, make sure the affected nodes are ready
for the new security flavor. E.g. if you change the flavor from
<literal>null</literal> to <literal>krb5p</literal>
but the GSS/Kerberos environment is not properly configured on the
affected nodes, those nodes might be evicted because they cannot
communicate with each other.
1378 <section xml:id="managingSecurity.kerberos.running.rulessyntaxexamples">
1379 <title>Rules Syntax & Examples</title>
1381 The general syntax is:
<target>.srpc.flavor.<network>[.<direction>]=<flavor>
1388 <literal><target></literal> can be filesystem name, or
1389 specific MDT/OST device name. For example
1390 <literal>testfs</literal>,
1391 <literal>testfs-MDT0000</literal>,
1392 <literal>testfs-OST0001</literal>.
1397 <literal><network></literal> is the LNet network name, for
1398 example <literal>tcp0</literal>, <literal>o2ib0</literal>, or
1399 <literal>default</literal> to not filter on LNet network.
1404 <literal><direction></literal> can be one of
1405 <emphasis>cli2mdt</emphasis>, <emphasis>cli2ost</emphasis>,
1406 <emphasis>mdt2mdt</emphasis>, <emphasis>mdt2ost</emphasis>.
1407 Direction is optional.
1417 Apply <literal>krb5i</literal> on
1418 <emphasis role="strong">ALL</emphasis> connections for file system
1419 <literal>testfs</literal>:
1423 <screen>mgs> lctl conf_param testfs.srpc.flavor.default=krb5i</screen>
1427 Nodes in network <literal>tcp0</literal> use
1428 <literal>krb5p</literal>; all other nodes use
1429 <literal>null</literal>.
1433 <screen>mgs> lctl conf_param testfs.srpc.flavor.tcp0=krb5p
1434 mgs> lctl conf_param testfs.srpc.flavor.default=null</screen>
1438 Nodes in network <literal>tcp0</literal> use
1439 <literal>krb5p</literal>; nodes in
1440 <literal>o2ib0</literal> use <literal>krb5n</literal>;
1441 among other nodes, clients use <literal>krb5i</literal>
1442 to MDT/OST, MDTs use <literal>null</literal> to other MDTs,
1443 MDTs use <literal>krb5a</literal> to OSTs.
1447 <screen>mgs> lctl conf_param testfs.srpc.flavor.tcp0=krb5p
1448 mgs> lctl conf_param testfs.srpc.flavor.o2ib0=krb5n
1449 mgs> lctl conf_param testfs.srpc.flavor.default.cli2mdt=krb5i
1450 mgs> lctl conf_param testfs.srpc.flavor.default.cli2ost=krb5i
1451 mgs> lctl conf_param testfs.srpc.flavor.default.mdt2mdt=null
1452 mgs> lctl conf_param testfs.srpc.flavor.default.mdt2ost=krb5a</screen>
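<para>After setting the rules above, the resulting configuration can be
verified on the MGS with the listing command shown earlier, here with
<literal>testfs</literal> as the file system name:</para>
<screen>mgs> lctl get_param mgs.MGS.live.testfs | grep "srpc.flavor"</screen>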
1454 <section xml:id="managingSecurity.kerberos.running.authenticatenormalusers">
1455 <title>Regular Users Authentication</title>
1457 On client nodes, non-root users need to issue
1458 <literal>kinit</literal> before accessing Lustre, just like other
1459 Kerberized applications.
As required by Kerberos, the user's principal
(<literal>username@REALM</literal>) should be added to the KDC.
1470 Client and MDT nodes should have the same user database
1471 used for name and uid/gid translation.
1476 Regular users can destroy the established security contexts before
1477 logging out, by issuing:
1479 <screen>lfs flushctx -k -r <mount point></screen>
Here <literal>-k</literal> destroys the on-disk Kerberos
credential cache, similar to <literal>kdestroy</literal>, and
<literal>-r</literal> reaps the revoked keys from the keyring
when flushing the GSS context. Without these options, only the
established contexts in kernel memory are destroyed.
1489 <section xml:id="managingSecurity.kerberos.securemgsconnection">
1490 <title>Secure MGS connection</title>
1492 Each node can specify which flavor to use to connect to the MGS, by
1493 using the <literal>mgssec=flavor</literal> mount option.
1494 Once a flavor is chosen, it cannot be changed until re-mount.
Because a Lustre node has only one connection to the MGS, if there is
more than one target or client on the node, they necessarily use the
same security flavor to the MGS, namely the one enforced when the first
connection to the MGS was established.
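<para>For example, a client can be mounted so that its connection to the
MGS uses the <literal>krb5p</literal> flavor (a sketch; the MGS NID and
file system name are placeholders):</para>
<screen>mount -t lustre -o mgssec=krb5p mgsnode@tcp0:/testfs /mnt/lustre</screen>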
1503 By default, the MGS accepts RPCs with any flavor. But it is possible to
1504 configure the MGS to only accept a given flavor. The syntax is identical
1505 to what is explained in paragraph
1506 <xref linkend="managingSecurity.kerberos.running.rulessyntaxexamples"/>,
1507 but with special target <literal>_mgs</literal>:
1509 <screen>mgs> lctl conf_param _mgs.srpc.flavor.<network>=<flavor></screen>