From 1791cdb9ebfe6dbcdf480e5db3cc5ce17b520b03 Mon Sep 17 00:00:00 2001 From: Sebastien Buisson Date: Fri, 7 Aug 2020 17:48:01 +0200 Subject: [PATCH] LUDOC-477 sec: doc for client-side encryption This patch adds documentation for the client-side encryption feature, as implemented by LU-12275. This doc is added under the Managing Security in a Lustre File System section. Signed-off-by: Sebastien Buisson Change-Id: I7fac0591faef517f7e74d055d7845a94246b6ffc Reviewed-on: https://review.whamcloud.com/39602 Reviewed-by: Andreas Dilger Tested-by: jenkins --- ManagingSecurity.xml | 548 ++++++++++++++++++++++++++++++++++++++++++++------- 1 file changed, 472 insertions(+), 76 deletions(-) diff --git a/ManagingSecurity.xml b/ManagingSecurity.xml index 6154d94..ae4ce51 100644 --- a/ManagingSecurity.xml +++ b/ManagingSecurity.xml @@ -18,6 +18,9 @@ + + +
<indexterm><primary>Access Control List (ACL)</primary></indexterm> @@ -158,10 +161,10 @@ other::---</screen> root squash feature also enables the Lustre file system administrator to specify a set of client for which UID/GID re-mapping does not apply. </para> - <note><para>Nodemaps (<xref linkend="lustrenodemap.title" />) are an - alternative to root squash, since it also allows root squash on a per-client - basis. With UID maps, the clients can even have a local root UID without - actually having root access to the filesystem itself.</para></note> + <note><para>Nodemaps (<xref linkend="lustrenodemap.title" />) are an + alternative to root squash, since it also allows root squash on a per-client + basis. With UID maps, the clients can even have a local root UID without + actually having root access to the filesystem itself.</para></note> <section xml:id="managingSecurity.root_squash.config" remap="h3"> <title><indexterm> <primary>root squash</primary> @@ -345,12 +348,12 @@ lctl get_param mdt.*.nosquash_nids</screen> This can be achieved by having physical hardware and/or network security, so that client nodes have well-known NIDs. It is also possible to make use of strong authentication with Kerberos or Shared-Secret Key - (see <xref linkend="lustressk" />). - Kerberos prevents NID spoofing, as every client needs its own - credentials, based on its NID, in order to connect to the servers. - Shared-Secret Key also prevents tenant impersonation, because keys - can be linked to a specific nodemap. See - <xref linkend="ssknodemaprole" /> for detailed explanations. + (see <xref linkend="lustressk" />). + Kerberos prevents NID spoofing, as every client needs its own + credentials, based on its NID, in order to connect to the servers. + Shared-Secret Key also prevents tenant impersonation, because keys + can be linked to a specific nodemap. See + <xref linkend="ssknodemaprole" /> for detailed explanations. 
</para> </section> <section xml:id="managingSecurity.isolation.configuring" remap="h3"> @@ -358,20 +361,20 @@ lctl get_param mdt.*.nosquash_nids</screen> configuring</secondary></indexterm>Configuring Isolation Isolation on Lustre can be achieved by setting the fileset parameter on a nodemap entry. All clients - belonging to this nodemap entry will automatically mount this fileset - instead of the root directory. For example: + belonging to this nodemap entry will automatically mount this fileset + instead of the root directory. For example: mgs# lctl nodemap_set_fileset --name tenant1 --fileset '/dir1' So all clients matching the tenant1 nodemap will be automatically presented the fileset /dir1 when - mounting. This means these clients are doing an implicit subdirectory - mount on the subdirectory /dir1. + mounting. This means these clients are doing an implicit subdirectory + mount on the subdirectory /dir1. - If subdirectory defined as fileset does not exist on the file system, - it will prevent any client belonging to the nodemap from mounting - Lustre. - + If subdirectory defined as fileset does not exist on the file system, + it will prevent any client belonging to the nodemap from mounting + Lustre. + To delete the fileset parameter, just set it to an empty string: @@ -435,51 +438,51 @@ mgs# lctl set_param -P nodemap.tenant1.fileset=/dir1 A string that represents the SELinux Status info will be used by servers as a reference, to check if clients are enforcing SELinux - properly. This reference string can be obtained on a client node known - to enforce the right SELinux policy, by calling the - l_getsepol command line utility: - client# l_getsepol + properly. 
This reference string can be obtained on a client node known + to enforce the right SELinux policy, by calling the + l_getsepol command line utility: + client# l_getsepol SELinux status info: 1:mls:31:40afb76d077c441b69af58cccaaa2ca63641ed6e21b0a887dc21a684f508b78f - The string describing the SELinux policy has the following - syntax: - mode:name:version:hash - where: - - - mode is a digit telling if SELinux is in - Permissive mode (0) or Enforcing mode (1) - - - name is the name of the SELinux policy - - - - version is the version of the SELinux - policy - - - hash is the computed hash of the binary - representation of the policy, as exported in - /etc/selinux/name/policy/policy. - version - - + The string describing the SELinux policy has the following + syntax: + mode:name:version:hash + where: + + + mode is a digit telling if SELinux is in + Permissive mode (0) or Enforcing mode (1) + + + name is the name of the SELinux policy + + + + version is the version of the SELinux + policy + + + hash is the computed hash of the binary + representation of the policy, as exported in + /etc/selinux/name/policy/policy. + version + +
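The four-field policy string above can be parsed mechanically. The following Python sketch is purely illustrative (it is not part of Lustre or l_getsepol); it splits a string such as the one reported above into the fields described in this section:

```python
# Illustrative helper (not part of Lustre): parse the
# "mode:name:version:hash" string reported by l_getsepol.
def parse_sepol(sepol: str) -> dict:
    mode, name, version, policy_hash = sepol.split(":", 3)
    return {
        "enforcing": mode == "1",   # 0 = Permissive, 1 = Enforcing
        "name": name,               # SELinux policy name, e.g. "mls"
        "version": int(version),    # SELinux policy version
        "hash": policy_hash,        # hash of the binary policy file
    }

info = parse_sepol(
    "1:mls:31:40afb76d077c441b69af58cccaaa2ca63641ed6e21b0a887dc21a684f508b78f")
print(info["name"], info["version"], info["enforcing"])
```

Such a helper could be used, for instance, to sanity-check a reference string before passing it to lctl nodemap_set_sepol.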
<indexterm><primary>selinux policy check</primary><secondary> enforcing</secondary></indexterm>Enforcing SELinux Policy Check SELinux policy check can be enforced by setting the sepol parameter on a nodemap entry. All clients - belonging to this nodemap entry must enforce the SELinux policy - described by this parameter, otherwise they are denied access to the - Lustre file system. For example: + belonging to this nodemap entry must enforce the SELinux policy + described by this parameter, otherwise they are denied access to the + Lustre file system. For example: mgs# lctl nodemap_set_sepol --name restricted --sepol '1:mls:31:40afb76d077c441b69af58cccaaa2ca63641ed6e21b0a887dc21a684f508b78f' So all clients matching the restricted nodemap must enforce the SELinux policy which description matches - 1:mls:31:40afb76d077c441b69af58cccaaa2ca63641ed6e21b0a887dc21a684f508b78f. - If not, they will get Permission Denied when trying to mount or access - files on the Lustre file system. + 1:mls:31:40afb76d077c441b69af58cccaaa2ca63641ed6e21b0a887dc21a684f508b78f. + If not, they will get Permission Denied when trying to mount or access + files on the Lustre file system. To delete the sepol parameter, just set it to an empty string: mgs# lctl nodemap_set_sepol --name restricted --sepol '' @@ -489,7 +492,7 @@ SELinux status info: 1:mls:31:40afb76d077c441b69af58cccaaa2ca63641ed6e21b0a887dc
<indexterm><primary>selinux policy check</primary><secondary> making permanent</secondary></indexterm>Making SELinux Policy Check - Permanent + Permanent In order to make SELinux Policy check permanent, the sepol parameter on the nodemap has to be set with lctl set_param with the -P option. @@ -502,30 +505,423 @@ mgs# lctl set_param -P nodemap.restricted.sepol=1:mls:31:40afb76d077c441b69af58c
<indexterm><primary>selinux policy check</primary><secondary> sending client</secondary></indexterm>Sending SELinux Status Info from - Clients + Clients In order for Lustre clients to send their SELinux status - information, in case SELinux is enabled locally, the - send_sepol ptlrpc kernel module's parameter has to be - set to a non-zero value. send_sepol accepts various - values: - - - 0: do not send SELinux policy info; - - - -1: fetch SELinux policy info for every request; - - - N > 0: only fetch SELinux policy info every N seconds. Use - N = 2^31-1 to have SELinux policy info - fetched only at mount time. - - - Clients that are part of a nodemap on which - sepol is defined must send SELinux status info. - And the SELinux policy they enforce must match the representation - stored into the nodemap. Otherwise they will be denied access to the - Lustre file system. + information, in case SELinux is enabled locally, the + send_sepol ptlrpc kernel module's parameter has to be + set to a non-zero value. send_sepol accepts various + values: + + + 0: do not send SELinux policy info; + + + -1: fetch SELinux policy info for every request; + + + N > 0: only fetch SELinux policy info every N seconds. Use + N = 2^31-1 to have SELinux policy info + fetched only at mount time. + + + Clients that are part of a nodemap on which + sepol is defined must send SELinux status info. + And the SELinux policy they enforce must match the representation + stored into the nodemap. Otherwise they will be denied access to the + Lustre file system. +
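The "mount time only" value mentioned above is simply the largest signed 32-bit integer. A quick check, with the corresponding setting shown as a comment (the parameter path is an assumption to verify on your system):

```python
# The send_sepol sentinel for "fetch SELinux policy info only at
# mount time" is the largest signed 32-bit integer, 2^31 - 1.
SEND_SEPOL_MOUNT_ONLY = 2**31 - 1
print(SEND_SEPOL_MOUNT_ONLY)  # 2147483647

# On a client, the ptlrpc module parameter would then be set with
# something like the following (path assumed, verify locally):
#   echo 2147483647 > /sys/module/ptlrpc/parameters/send_sepol
```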
+
+
+ <indexterm><primary>Client-side encryption</primary></indexterm>
+ Encrypting files and directories
+ The purpose of client-side encryption is to provide a special
+ directory for each user, in which sensitive files can be safely
+ stored. The goals are to protect data in transit between clients and
+ servers, and to protect data at rest.
+ This feature is implemented directly at the Lustre client level.
+ Lustre client-side encryption relies on kernel fscrypt.
+ fscrypt is a library which filesystems can hook into to
+ support transparent encryption of files and directories. As a consequence,
+ the key points described below are extracted from
+ fscrypt documentation.
+ For full details, please refer to the documentation available with the
+ Lustre sources, under the
+ Documentation/client_side_encryption directory.
+
+ The client-side encryption feature is available on Lustre
+ clients running a Linux distribution with at least kernel 5.4, or that have
+ backported the fscrypt v2 support, including:
+
+ CentOS/RHEL 8.1 and later;
+ Ubuntu 18.04 and later;
+ SLES 15 SP2 and later.
+
+
+ <indexterm><primary>encryption access semantics</primary> + </indexterm>Client-side encryption access semantics + Only Lustre clients need access to encryption master keys. Keys are + added to the filesystem-level encryption keyring on the Lustre client. + + + With the key + With the encryption key, encrypted regular files, directories, + and symlinks behave very similarly to their unencrypted + counterparts --- after all, the encryption is intended to be + transparent. However, astute users may notice some differences in + behavior: + + + Unencrypted files, or files encrypted with a different + encryption policy (i.e. different key, modes, or flags), + cannot be renamed or linked into an encrypted directory. + However, encrypted files can be renamed within an encrypted + directory, or into an unencrypted directory. + "moving" an unencrypted file into an encrypted + directory, e.g. with the mv program, is + implemented in userspace by a copy followed by a delete. Be + aware the original unencrypted data may remain recoverable + from free space on the disk; it is best to keep all files + encrypted from the very beginning. + + On Lustre, Direct I/O is supported for encrypted + files. + + The fallocate() operations + FALLOC_FL_COLLAPSE_RANGE, + FALLOC_FL_INSERT_RANGE, and + FALLOC_FL_ZERO_RANGE are not + supported on encrypted files and will fail with + EOPNOTSUPP. + + + DAX (Direct Access) is not supported on encrypted + files. + + mmap is supported. This is + possible because the pagecache for an encrypted file contains + the plaintext, not the ciphertext. + + + + + Without the key + Some filesystem operations may be performed on encrypted + regular files, directories, and symlinks even before their + encryption key has been added, or after their encryption key has + been removed: + + + File metadata may be read, e.g. using + stat(). + + + Directories may be listed, and the whole namespace tree + may be walked through. + + + + Files may be deleted. 
That is, nondirectory files may be + deleted with unlink() as usual, and empty + directories may be deleted with rmdir() as + usual. Therefore, rm and + rm -r will work as expected. + + + Symlink targets may be read and followed, but they will + be presented in encrypted form, similar to filenames in + directories. Hence, they are unlikely to point to anywhere + useful. + + + Without the key, regular files cannot be opened or truncated. + Attempts to do so will fail with ENOKEY. This + implies that any regular file operations that require a file + descriptor, such as read(), + write(), mmap(), + fallocate(), and ioctl(), + are also forbidden. + Also without the key, files of any type (including + directories) cannot be created or linked into an encrypted + directory, nor can a name in an encrypted directory be the source + or target of a rename, nor can an O_TMPFILE + temporary file be created in an encrypted directory. All such + operations will fail with ENOKEY. + It is not currently possible to backup and restore encrypted + files without the encryption key. This would require special + APIs which have not yet been implemented. + + + Encryption policy enforcement + + After an encryption policy has been set on a directory, all + regular files, directories, and symbolic links created in that + directory (recursively) will inherit that encryption policy. + Special files --- that is, named pipes, device nodes, and UNIX + domain sockets --- will not be encrypted. + Except for those special files, it is forbidden to have + unencrypted files, or files encrypted with a different encryption + policy, in an encrypted directory tree. + + + +
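The "without the key" rules above amount to a small decision table. The sketch below is an illustrative model only (not Lustre code, and the operation names are informal labels rather than syscall wrappers); it encodes which operations succeed without the key and which fail with ENOKEY:

```python
import errno

# Illustrative model of the "without the key" semantics described
# above. ENOKEY is Linux-specific; fall back to its Linux value.
ENOKEY = getattr(errno, "ENOKEY", 126)

# Metadata reads, directory listing, deletion, and readlink work
# without the key; everything needing plaintext access does not.
ALLOWED_WITHOUT_KEY = {"stat", "readdir", "unlink", "rmdir", "readlink"}

def check_op(op: str, have_key: bool) -> int:
    """Return 0 on success, or the errno the operation fails with."""
    if have_key or op in ALLOWED_WITHOUT_KEY:
        return 0
    return ENOKEY  # open, truncate, create, link, rename are denied

print(check_op("readdir", have_key=False))  # 0: listing works keyless
```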
+
+ <indexterm><primary>encryption key hierarchy</primary> + </indexterm>Client-side encryption key hierarchy + Each encrypted directory tree is protected by a master key. + To "unlock" an encrypted directory tree, userspace must provide the + appropriate master key. There can be any number of master keys, each + of which protects any number of directory trees on any number of + filesystems. +
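The hierarchy can be pictured as a key-derivation step: fscrypt derives a distinct key for each file from the master key plus a per-file nonce (fscrypt v2 uses HKDF-SHA512 for this). The sketch below mimics the idea with a plain HMAC, purely for illustration; it is not the real fscrypt derivation:

```python
import hashlib
import hmac
import os

def derive_file_key(master_key: bytes, file_nonce: bytes) -> bytes:
    # Illustration only: real fscrypt v2 uses HKDF-SHA512 with
    # well-defined info strings, not a bare HMAC truncation.
    return hmac.new(master_key, file_nonce, hashlib.sha512).digest()[:32]

master = os.urandom(32)                    # one master key per directory tree
nonce_a, nonce_b = os.urandom(16), os.urandom(16)

# Each file gets its own key, and the same (master, nonce) pair
# always yields the same key, so files survive a remount.
assert derive_file_key(master, nonce_a) != derive_file_key(master, nonce_b)
assert derive_file_key(master, nonce_a) == derive_file_key(master, nonce_a)
```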
+
+ <indexterm><primary>encryption modes usage</primary> + </indexterm>Client-side encryption modes and usage + fscrypt allows one encryption mode to be + specified for file contents and one encryption mode to be specified for + filenames. Different directory trees are permitted to use different + encryption modes. Currently, the following pairs of encryption modes are + supported: + + + AES-256-XTS for contents and AES-256-CTS-CBC for filenames + + + + AES-128-CBC for contents and AES-128-CTS-CBC for filenames + + + + If unsure, you should use the (AES-256-XTS, AES-256-CTS-CBC) pair. + + In Lustre 2.14, client-side encryption only supports + content encryption, and not filename encryption. As a consequence, only + content encryption mode will be taken into account, and filename + encryption mode will be ignored to leave filenames in clear text. + +
+
+ <indexterm><primary>encryption threat model</primary> + </indexterm>Client-side encryption threat model + + + Offline attacks + For the Lustre case, block devices are Lustre targets attached + to the Lustre servers. Manipulating the filesystem offline means + accessing the filesystem on these targets while Lustre is offline. + + Provided that a strong encryption key is chosen, + fscrypt protects the confidentiality of file + contents in the event of a single point-in-time permanent offline + compromise of the block device content. + Lustre client-side encryption does not protect the confidentiality + of metadata, e.g. file names, file sizes, file permissions, file + timestamps, and extended attributes. Also, the existence and + location of holes (unallocated blocks which logically contain all + zeroes) in files is not protected. + + + Online attacks + + + On Lustre client + After an encryption key has been added, + fscrypt does not hide the plaintext file + contents or filenames from other users on the same node. + Instead, existing access control mechanisms such as file mode + bits, POSIX ACLs, LSMs, or namespaces should be used for this + purpose. + For the Lustre case, it means plaintext file contents or + filenames are not hidden from other users on the same Lustre + client. + An attacker who compromises the system enough to read from + arbitrary memory, e.g. by exploiting a kernel security + vulnerability, can compromise all encryption keys that are + currently in use. + However, fscrypt allows encryption keys to + be removed from the kernel, which may protect them from later + compromise. Key removal can be carried out by non-root users. + In more detail, the key removal will wipe the master encryption + key from kernel memory. Moreover, it will try to evict all + cached inodes which had been "unlocked" using the key, thereby + wiping their per-file keys and making them once again appear + "locked", i.e. in ciphertext or encrypted form. 
+
+
+ On Lustre server
+ An attacker on a Lustre server who compromises the system
+ enough to read arbitrary memory, e.g. by exploiting a kernel
+ security vulnerability, cannot compromise the content of Lustre
+ files. Indeed, encryption keys are never forwarded to the Lustre
+ servers, and servers do not carry out decryption or encryption.
+ Moreover, bulk RPCs received by servers contain encrypted data,
+ which is written as-is to the underlying filesystem.
+
+
+
+
+
+ <indexterm><primary>encryption fscrypt policy</primary>
+ </indexterm>Manage encryption on directories
+ By default, Lustre client-side encryption is enabled, letting users
+ define encryption policies on a per-directory basis.
+ Administrators can prevent a Lustre client
+ mount-point from using encryption by specifying the
+ noencrypt client mount option. This can also be
+ enforced on the server side thanks to the
+ forbid_encryption property on nodemaps. See
+ for how to manage nodemaps.
+
+ The fscrypt userspace tool can be used to manage
+ encryption policies. See https://github.com/google/fscrypt for
+ comprehensive explanations. Below are examples of how to use this tool
+ with Lustre. Unless noted otherwise, commands must be run on the Lustre
+ client side.
+
+
+ Two preliminary steps are required before actually deciding
+ which directories to encrypt, and this is the only
+ functionality which requires root privileges. The administrator has to
+ run:
+ # fscrypt setup
+Customizing passphrase hashing difficulty for this system...
+Created global config file at "/etc/fscrypt.conf".
+Metadata directories created at "/.fscrypt".
+ This first command has to be run on all clients that want to use
+ encryption, as it sets up global fscrypt parameters outside of
+ Lustre.
+ # fscrypt setup /mnt/lustre
+Metadata directories created at "/mnt/lustre/.fscrypt"
+ This second command has to be run on just one Lustre
+ client.
+ The file /etc/fscrypt.conf can be
+ edited. It is strongly recommended to set
+ policy_version to 2, so that
+ fscrypt wipes files from memory when the
+ encryption key is removed. 
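The policy_version setting above can also be scripted. /etc/fscrypt.conf is a JSON file; the sketch below assumes the field layout of recent fscrypt releases, where policy_version is a string inside the "options" object, and should be verified against your local file:

```python
import json

# Hedged sketch: force policy_version to "2" in an fscrypt config.
# The "options" -> "policy_version" layout is an assumption based on
# recent fscrypt releases; check /etc/fscrypt.conf locally.
def set_policy_version(conf_text: str, version: str = "2") -> str:
    conf = json.loads(conf_text)
    conf.setdefault("options", {})["policy_version"] = version
    return json.dumps(conf, indent=2)

sample = '{"source": "custom_passphrase", "options": {"policy_version": "1"}}'
print(set_policy_version(sample))
```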
+
+
+ Now a regular user is able to select a directory to
+ encrypt:
+ $ fscrypt encrypt /mnt/lustre/vault
+The following protector sources are available:
+1 - Your login passphrase (pam_passphrase)
+2 - A custom passphrase (custom_passphrase)
+3 - A raw 256-bit key (raw_key)
+Enter the source number for the new protector [2 - custom_passphrase]: 2
+Enter a name for the new protector: shield
+Enter custom passphrase for protector "shield":
+Confirm passphrase:
+"/mnt/lustre/vault" is now encrypted, unlocked, and ready for use.
+ Starting from here, all files and directories created under
+ /mnt/lustre/vault will be encrypted, according
+ to the policy defined at the previous step.
+ The encryption policy is inherited by all subdirectories.
+ It is not possible to change the policy for a subdirectory.
+
+
+
+ Another user can decide to encrypt a different directory with
+ its own protector:
+ $ fscrypt encrypt /mnt/lustre/private
+Should we create a new protector? [y/N] Y
+The following protector sources are available:
+1 - Your login passphrase (pam_passphrase)
+2 - A custom passphrase (custom_passphrase)
+3 - A raw 256-bit key (raw_key)
+Enter the source number for the new protector [2 - custom_passphrase]: 2
+Enter a name for the new protector: armor
+Enter custom passphrase for protector "armor":
+Confirm passphrase:
+"/mnt/lustre/private" is now encrypted, unlocked, and ready for use.
+
+
+ Users can decide to lock an encrypted directory at any
+ time:
+ $ fscrypt lock /mnt/lustre/vault
+"/mnt/lustre/vault" is now locked.
+ This action prevents access to encrypted content, and by
+ removing the key from memory, it also wipes files from memory if
+ they are not still open.
+
+
+ Users regain access to the encrypted directory with the command:
+
+ $ fscrypt unlock /mnt/lustre/vault
+Enter custom passphrase for protector "shield":
+"/mnt/lustre/vault" is now unlocked and ready for use. 
+
+
+ Note that fscrypt does not give direct access
+ to master keys, but to protectors that are used to encrypt them.
+ This mechanism makes it possible to change a passphrase:
+ $ fscrypt status /mnt/lustre
+lustre filesystem "/mnt/lustre" has 2 protectors and 2 policies
+
+PROTECTOR LINKED DESCRIPTION
+deacab807bf0e788 No custom protector "shield"
+e691ae7a1990fc2a No custom protector "armor"
+
+POLICY UNLOCKED PROTECTORS
+52b2b5aff0e59d8e0d58f962e715862e No deacab807bf0e788
+374e8944e4294b527e50363d86fc9411 No e691ae7a1990fc2a
+
+$ fscrypt metadata change-passphrase --protector=/mnt/lustre:deacab807bf0e788
+Enter old custom passphrase for protector "shield":
+Enter new custom passphrase for protector "shield":
+Confirm passphrase:
+Passphrase for protector deacab807bf0e788 successfully changed.
+ It also makes it possible to have multiple protectors for the same
+ policy. This is useful when several users share an encrypted
+ directory, because it avoids the need to share any secret between
+ them.
+ $ fscrypt status /mnt/lustre/vault
+"/mnt/lustre/vault" is encrypted with fscrypt.
+
+Policy: 52b2b5aff0e59d8e0d58f962e715862e
+Options: padding:32 contents:AES_256_XTS filenames:AES_256_CTS policy_version:2
+Unlocked: No
+
+Protected with 1 protector:
+PROTECTOR LINKED DESCRIPTION
+deacab807bf0e788 No custom protector "shield"
+
+$ fscrypt metadata create protector /mnt/lustre
+Create new protector on "/mnt/lustre" [Y/n] Y
+The following protector sources are available:
+1 - Your login passphrase (pam_passphrase)
+2 - A custom passphrase (custom_passphrase)
+3 - A raw 256-bit key (raw_key)
+Enter the source number for the new protector [2 - custom_passphrase]: 2
+Enter a name for the new protector: bunker
+Enter custom passphrase for protector "bunker":
+Confirm passphrase:
+Protector f3cc1b5cf9b8f41c created on filesystem "/mnt/lustre". 
+ +$ fscrypt metadata add-protector-to-policy + --protector=/mnt/lustre:f3cc1b5cf9b8f41c + --policy=/mnt/lustre:52b2b5aff0e59d8e0d58f962e715862e +WARNING: All files using this policy will be accessible with this protector!! +Protect policy 52b2b5aff0e59d8e0d58f962e715862e with protector f3cc1b5cf9b8f41c? [Y/n] Y +Enter custom passphrase for protector "bunker": +Enter custom passphrase for protector "shield": +Protector f3cc1b5cf9b8f41c now protecting policy 52b2b5aff0e59d8e0d58f962e715862e. + +$ fscrypt status /mnt/lustre/vault +"/mnt/lustre/vault" is encrypted with fscrypt. + +Policy: 52b2b5aff0e59d8e0d58f962e715862e +Options: padding:32 contents:AES_256_XTS filenames:AES_256_CTS policy_version:2 +Unlocked: No + +Protected with 2 protectors: +PROTECTOR LINKED DESCRIPTION +deacab807bf0e788 No custom protector "shield" +f3cc1b5cf9b8f41c No custom protector "bunker" + +
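The protector mechanism shown in the listings above can be sketched abstractly: each protector stores the policy's master key wrapped under a key derived from that protector's passphrase, so any one passphrase recovers the same master key without the users sharing a secret. The toy model below uses XOR wrapping with PBKDF2 for illustration only; real fscrypt uses hardened hashing and authenticated encryption:

```python
import hashlib
import os

def _protector_key(passphrase: str, salt: bytes) -> bytes:
    # Toy derivation: real fscrypt uses configurable, hardened hashing.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

def wrap(master: bytes, passphrase: str, salt: bytes) -> bytes:
    # XOR the 32-byte master key with a 32-byte passphrase-derived key.
    key = _protector_key(passphrase, salt)
    return bytes(a ^ b for a, b in zip(master, key))

def unwrap(wrapped: bytes, passphrase: str, salt: bytes) -> bytes:
    return wrap(wrapped, passphrase, salt)  # XOR is its own inverse

master = os.urandom(32)                     # one master key per policy
salt1, salt2 = os.urandom(16), os.urandom(16)

# Two protectors ("shield" and "bunker") wrap the same master key:
w1 = wrap(master, "shield-passphrase", salt1)
w2 = wrap(master, "bunker-passphrase", salt2)
assert unwrap(w1, "shield-passphrase", salt1) == master
assert unwrap(w2, "bunker-passphrase", salt2) == master
```

This is why adding a protector to a policy (as in the add-protector-to-policy listing above) asks for both passphrases: the existing protector is needed to recover the master key, the new one to wrap it again.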
-- 1.8.3.1