From f306cef170b7e88f791c3fe9308ba2ea5abac4a5 Mon Sep 17 00:00:00 2001 From: Ryan Haasken Date: Thu, 24 Jul 2014 15:52:07 -0500 Subject: [PATCH] LUDOC-251 hsm: Fix typos and wording in HSM section Fixed the HSM event names in section 22.7. They now match the enum hsm_event in the Lustre source, and their names now match their descriptions. Also fixed minor English usage mistakes in an effort to make the HSM chapter more readable. Signed-off-by: Ryan Haasken Change-Id: Ib331fce9a7789a8ee51f8063c5d0ddf9dd696887 Reviewed-on: http://review.whamcloud.com/11223 Tested-by: Jenkins Reviewed-by: Richard Henwood --- LustreHSM.xml | 76 +++++++++++++++++++++++++++++------------------------------ 1 file changed, 38 insertions(+), 38 deletions(-) diff --git a/LustreHSM.xml b/LustreHSM.xml index d8b021b..a9c6877 100644 --- a/LustreHSM.xml +++ b/LustreHSM.xml @@ -19,7 +19,7 @@ the Lustre file system. The process of copying a file into the HSM storage is known as archive. Once the archive is complete, the Lustre file -data can be delete (know as release.) The process of +data can be deleted (known as release.) The process of returning data from the HSM storage to the Lustre file system is called restore. The archive and restore operations require a Lustre file system component called an Agent. @@ -58,7 +58,7 @@ facet on the MDT called the Coordinator. a minimum of 2 clients, 1 used for your chosen computation task that generates - useful data, and 1 used as agent. + useful data, and 1 used as an agent. Multiple agents can be employed. All the agents need to share access @@ -75,7 +75,7 @@ facet on the MDT called the Coordinator. must be activated on each of your filesystem MDTs. This can be achieved with the command: $ lctl set_param mdt.$FSNAME-MDT0000.hsm_control=enabled mdt.lustre-MDT0000.hsm_control=enabled - To verify if the coordinator is running correctly + To verify that the coordinator is running correctly $ lctl get_param mdt.$FSNAME-MDT0000.hsm_control mdt.lustre-MDT0000.hsm_control=enabled @@ -86,9 +86,9 @@ mdt.lustre-MDT0000.hsm_control=enabled HSMagentsAgents - Once a coordinator is started launch the copytool on each agent node to connect to your HSM storage. If your HSM storage has POSIX access this command will be of the form: + Once a coordinator is started, launch the copytool on each agent node to connect to your HSM storage. If your HSM storage has POSIX access this command will be of the form: lhsmtool_posix --daemon --hsm-root $HSMPATH --archive=1 $LUSTREPATH - POSIX copytool must be stopped sending it a TERM signal. + The POSIX copytool must be stopped by sending it a TERM signal. @@ -101,10 +101,10 @@ mdt.lustre-MDT0000.hsm_control=enabled Agents are Lustre file system clients running copytool. copytool is a userspace daemon that transfers data between Lustre and a HSM solution. Because different HSM solutions use different APIs, copytools can typically only work with a -specific HSM. Only one copytool could be run by agent node. +specific HSM. Only one copytool can be run by an agent node. The following rule applies regarding copytool instances: a Lustre file -system only supports a single copytool process, per ARCHIVE ID (see below), +system only supports a single copytool process, per ARCHIVE ID (see below), per client node. Due to a Lustre software limitation, this constraint is irrespective of the number of Lustre file systems mounted by the Agent. @@ -125,15 +125,15 @@ ID must be in the range 1 to 32. You need, at least, one copytool per ARCHIVE ID. 
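As a purely illustrative sketch of the activation steps described above (not part of the patch itself), the following assumes a file system named lustre mounted on /mnt/lustre and a hypothetical POSIX archive directory /archive on the agent node:

$ # On the MDS: enable and verify the HSM coordinator for MDT0000
$ lctl set_param mdt.lustre-MDT0000.hsm_control=enabled
$ lctl get_param mdt.lustre-MDT0000.hsm_control
$ # On the agent node: start the POSIX copytool, registering it for ARCHIVE ID 1
$ lhsmtool_posix --daemon --hsm-root /archive --archive=1 /mnt/lustre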
When using the POSIX copytool, this ID is defined using --archive switch. -For example: if a single Lustre file system is bound to 2 different HSMs (A and B,) ARCHIVE ID “1” can be chosen for HSM A and ARCHIVE ID “2” for HSM B. If you start 3 copytool instances for ARCHIVE ID 1, all of them will use Archive ID “1”. Same rule applies for copytool instances dealing with the HSM B, using Archive ID “2”. +For example: if a single Lustre file system is bound to 2 different HSMs (A and B,) ARCHIVE ID “1” can be chosen for HSM A and ARCHIVE ID “2” for HSM B. If you start 3 copytool instances for ARCHIVE ID 1, all of them will use Archive ID “1”. The same rule applies for copytool instances dealing with the HSM B, using Archive ID “2”. -When issuing HSM requests, you can use --archive switch +When issuing HSM requests, you can use the --archive switch to choose the backend you want to use. In this example, file foo will be archived into backend ARCHIVE ID “5”: $ lfs hsm_archive --archive=5 /mnt/lustre/foo -A default ARCHIVE ID can be defined when this switch is not specified: + A default ARCHIVE ID can be defined which will be used when the --archive switch is not specified: $ lctl set_param -P mdt.lustre-MDT0000.hsm.default_archive_id=5 @@ -152,9 +152,9 @@ hsm_state command: A Lustre file system allocates a unique UUID per client mount point, for each filesystem. Only one copytool can be registered for each Lustre mount point. -As a consequence the UUID uniquely identifies a copytool, per filesystem. +As a consequence, the UUID uniquely identifies a copytool, per filesystem. -The currently registered copytool instances (agents UUID) can be retrieved running the following command, per MDT, on MDS nodes: +The currently registered copytool instances (agents UUID) can be retrieved by running the following command, per MDT, on MDS nodes: $ lctl get_param -n mdt.$FSNAME-MDT0000.hsm.agents uuid=a19b2416-0930-fc1f-8c58-c985ba5127ad archive_id=1 requests=[current:0 ok:0 errors:0] @@ -162,13 +162,13 @@ uuid=a19b2416-0930-fc1f-8c58-c985ba5127ad archive_id=1 requests=[current:0 ok:0 The returned fields have the following meaning: - UUID the client mount used by the corresponding copytool. + uuid the client mount used by the corresponding copytool. - archive_id comma-separated list of ARCHIVE ID accessible by this copytool. + archive_id comma-separated list of ARCHIVE IDs accessible by this copytool. - requests various statistics of number of processed requests by this copytool. + requests various statistics on the number of requests processed by this copytool. @@ -196,7 +196,7 @@ able to fully complete a request within this time. The default is 3600 seconds. HSMrequestsRequests - Data management between a Lustre file system and HSM solutions is driven by requests. There are tfive types: + Data management between a Lustre file system and HSM solutions is driven by requests. There are five types: @@ -212,7 +212,7 @@ able to fully complete a request within this time. The default is 3600 seconds. REMOVE Delete the copy of the data from the HSM solution. - CANCEL Cancel an undergoing or pending request. + CANCEL Cancel an in-progress or pending request. @@ -232,7 +232,7 @@ $ lfs hsm_restore FILE1 [FILE2FILE1 [FILE2...] - Requests are sent by default to the default ARCHIVE ID or the specified one (See .) +Requests are sent to the default ARCHIVE ID unless an ARCHIVE ID is specified with the --archive option (See ).
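As a sketch of a complete request cycle built from the lfs hsm_* commands above (illustrative only; /mnt/lustre/foo is a placeholder file, and the requests use the default ARCHIVE ID):

$ # Archive the file, then check its HSM flags with hsm_state
$ lfs hsm_archive /mnt/lustre/foo
$ lfs hsm_state /mnt/lustre/foo
$ # Once the archive has completed, the Lustre copy of the data can be released
$ lfs hsm_release /mnt/lustre/foo
$ # An explicit restore request (or simply reading the file) brings the data back
$ lfs hsm_restore /mnt/lustre/foo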
@@ -253,7 +253,7 @@ $ lfs hsm_remove FILE1 [FILE2$ lctl get_param -n mdt.lustre-MDT0000.hsm.actions - The list of request currently being processed by a copytool is available with: + The list of requests currently being processed by a copytool is available with: $ lctl get_param -n mdt.lustre-MDT0000.hsm.active_requests @@ -284,16 +284,16 @@ $ lfs hsm_remove FILE1 [FILE2 -The following options can only be set by root user. +The following options can only be set by the root user. - LOST This file previously archived but the -copy was lost on the HSM solution for some reasons in the backend (for example, + LOST This file was previously archived but the +copy was lost on the HSM solution for some reason in the backend (for example, by a corrupted tape), and could not be restored. If the file is not in the -state of RELEASE it needs to be archived again. If the file -state is in RELEASE, file data is lost. +RELEASE state it needs to be archived again. If the file +is in the RELEASE state, the file data is lost. @@ -329,7 +329,7 @@ $ lfs hsm_clear [FLAGS] FILE1disabled Pause coordinator activity. No new request will be scheduled. No timeout will be handled. New requests will be registered but will be handled only when the coordinator is enabled again. - shutdown Stop coordinator thread. No request could be submitted. + shutdown Stop coordinator thread. No request can be submitted. purge Clear all recorded requests. Do not change coordinator state. @@ -358,7 +358,7 @@ the number of agents. HSMpolicypolicy - Change system behavior. Value could be combined or removed prefixing them by '+' or '-'. + Change system behavior. Values can be added or removed by prefixing them with '+' or '-'. $ lctl set_param mdt.$FSNAME-MDT0000.hsm.policy=+NRA @@ -412,19 +412,19 @@ information (lowest bits first): HE_ARCHIVE = 0 File has been archived. - HE_ARCHIVE = 1 File has been restored. + HE_RESTORE = 1 File has been restored. - HE_ARCHIVE = 2 A request for this file has been canceled. + HE_CANCEL = 2 A request for this file has been canceled. - HE_ARCHIVE = 3 File has been released. + HE_RELEASE = 3 File has been released. - HE_ARCHIVE = 4 A remove request has been executed automatically. + HE_REMOVE = 4 A remove request has been executed automatically. - HE_ARCHIVE = 5 File flags have changed. + HE_STATE = 5 File flags have changed. @@ -437,9 +437,9 @@ information (lowest bits first): - In the above example, 0x280 means error code is 0 and event is HE_STATE. + In the above example, 0x280 means the error code is 0 and the event is HE_STATE. - When using liblustreapi, there is a list of helper functions to easily extract the different values from this bitmask, like: hsm_get_cl_event(), hsm_get_cl_flags(), hsm_get_cl_error() + When using liblustreapi, there is a list of helper functions to easily extract the different values from this bitmask, like: hsm_get_cl_event(), hsm_get_cl_flags(), and hsm_get_cl_error()
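The changelog records that carry these HSM events can also be examined from the command line. A minimal sketch, assuming an MDT named lustre-MDT0000 and that no changelog consumer has been registered yet (these are standard Lustre changelog administration commands, not introduced by this patch):

$ # On the MDS: register a changelog consumer so that records are retained
$ lctl --device lustre-MDT0000 changelog_register
$ # On a client: dump the records; for HSM records the flags field encodes
$ # the HE_* event and the error code, e.g. 0x280 = event HE_STATE, error 0
$ lfs changelog lustre-MDT0000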
@@ -450,7 +450,7 @@ information (lowest bits first): A Lustre file system does not have an internal component responsible for automatically scheduling archive requests and release requests under any conditions (like low space). Automatically scheduling archive operations is the role of the policy engine. - It is recommended that the Policy Engine runs on a dedicated client, similar to an agent node, with a Lustre version 2.5+. + It is recommended that the Policy Engine run on a dedicated client, similar to an agent node, with a Lustre version 2.5+. A policy engine is a userspace program using the Lustre file system HSM specific API to monitor the file system and schedule requests. @@ -465,11 +465,11 @@ information (lowest bits first): Robinhood is a Policy engine and reporting tool for large file systems. It maintains a replicate of file system medatada in a database that can be queried at will. Robinhood makes it possible to schedule mass action on -file system entries by defining attribute-based policies, provides fast find -and du enhanced clones, gives to administrators an overall -view of file system content through a web user interface and command line tools. +file system entries by defining attribute-based policies, provides fast find +and du enhanced clones, and provides administrators with an overall +view of file system content through a web interface and command line tools. -Robinhood can be used for various configuration. Robinhood is an external project and further information can be found on the website: https://sourceforge.net/apps/trac/robinhood/wiki/Doc. +Robinhood can be used for various configurations. Robinhood is an external project, and further information can be found on the project website: https://sourceforge.net/apps/trac/robinhood/wiki/Doc. -- 1.8.3.1