1 Connections Between Lustre Entities
2 -----------------------------------
The Lustre protocol is connection-based in that each pair of entities
maintains shared, coordinated state information. The most common
example of two such entities is a client and a target on some
8 server. The target is identified by name to the client through an
9 interaction with the management server. The client then 'connects' to
10 the given target on the indicated server by sending the appropriate
11 version of the *_CONNECT message (MGS_CONNECT, MDS_CONNECT, or
OST_CONNECT - collectively *_CONNECT) and receiving back the
13 corresponding *_CONNECT reply. The server creates an 'export' for the
14 connection between the target and the client, and the export holds the
15 server state information for that connection. When the client gets the
16 reply it creates an 'import', and the import holds the client state
17 information for that connection. Note that if a server has N targets
18 and M clients have connected to them, the server will have N x M
19 exports and each client will have N imports.
21 There are also connections between the servers: Each MDS and OSS has a
22 connection to the MGS, where the MDS (respectively the OSS) plays the
23 role of the client in the above discussion. That is, the MDS initiates
24 the connection and has an import for the MGS, while the MGS has an
25 export for each MDS. Each MDS connects to each OST, with an import on
26 the MDS and an export on the OSS. This connection supports requests
27 from the MDS to the OST to create and destroy data objects, to set
28 attributes (such as permission bits), and for 'statfs' information for
29 precreation needs. Each OSS also connects to the first MDS to get
30 access to auxiliary services, with an import on the OSS and an export
31 on the first MDS. The auxiliary services are: the File ID Location
32 Database (FLDB), the quota master service, and the sequence
33 controller. This connection for auxiliary services is a 'lightweight'
34 one in that it has no replay functionality and consumes no space on
the MDS for client data. Each MDS also connects to every other MDS,
in support of distributed namespace (DNE) operations.
38 Finally, for some communications the roles of message initiation and
39 message reply are reversed. This is the case, for instance, with
40 call-back operations. In that case the entity which would normally
41 have an import has, instead, a 'reverse-export' and the
42 other end of the connection maintains a 'reverse-import'. The
43 reverse-import uses the same structure as a regular import, and the
44 reverse-export uses the same structure as a regular export.
52 An 'obd_connect_data' structure accompanies every connect operation in
both the request message and the reply message.
56 struct obd_connect_data {
57 __u64 ocd_connect_flags;
58 __u32 ocd_version; /* OBD_CONNECT_VERSION */
59 __u32 ocd_grant; /* OBD_CONNECT_GRANT */
60 __u32 ocd_index; /* OBD_CONNECT_INDEX */
61 __u32 ocd_brw_size; /* OBD_CONNECT_BRW_SIZE */
62 __u64 ocd_ibits_known; /* OBD_CONNECT_IBITS */
63 __u8 ocd_blocksize; /* OBD_CONNECT_GRANT_PARAM */
64 __u8 ocd_inodespace; /* OBD_CONNECT_GRANT_PARAM */
65 __u16 ocd_grant_extent; /* OBD_CONNECT_GRANT_PARAM */
67 __u64 ocd_transno; /* OBD_CONNECT_TRANSNO */
68 __u32 ocd_group; /* OBD_CONNECT_MDS */
69 __u32 ocd_cksum_types; /* OBD_CONNECT_CKSUM */
__u32 ocd_max_easize; /* OBD_CONNECT_MAX_EASIZE */
__u32 ocd_instance; /* instance number of this target */
__u64 ocd_maxbytes; /* OBD_CONNECT_MAXBYTES */
/* the remaining fields are padding reserved for future use */
};
91 The 'ocd_connect_flags' field encodes the connect flags giving the
92 capabilities of a connection between client and target. Several of
93 those flags (noted in comments above and the discussion below)
94 actually control whether the remaining fields of 'obd_connect_data'
95 get used. The [[connect-flags]] flags are:
98 #define OBD_CONNECT_RDONLY 0x1ULL /*client has read-only access*/
99 #define OBD_CONNECT_INDEX 0x2ULL /*connect specific LOV idx */
100 #define OBD_CONNECT_MDS 0x4ULL /*connect from MDT to OST */
101 #define OBD_CONNECT_GRANT 0x8ULL /*OSC gets grant at connect */
102 #define OBD_CONNECT_SRVLOCK 0x10ULL /*server takes locks for cli */
103 #define OBD_CONNECT_VERSION 0x20ULL /*Lustre versions in ocd */
104 #define OBD_CONNECT_REQPORTAL 0x40ULL /*Separate non-IO req portal */
105 #define OBD_CONNECT_ACL 0x80ULL /*access control lists */
106 #define OBD_CONNECT_XATTR 0x100ULL /*client use extended attr */
107 #define OBD_CONNECT_CROW 0x200ULL /*MDS+OST create obj on write*/
108 #define OBD_CONNECT_TRUNCLOCK 0x400ULL /*locks on server for punch */
109 #define OBD_CONNECT_TRANSNO 0x800ULL /*replay sends init transno */
110 #define OBD_CONNECT_IBITS 0x1000ULL /*support for inodebits locks*/
#define OBD_CONNECT_JOIN 0x2000ULL /*files can be concatenated.
*We do not support JOIN FILE
*anymore, reserve this flag
*to prevent the bit being reused*/
#define OBD_CONNECT_ATTRFID 0x4000ULL /*Server can GetAttr By Fid */
117 #define OBD_CONNECT_NODEVOH 0x8000ULL /*No open hndl on specl nodes*/
118 #define OBD_CONNECT_RMT_CLIENT 0x10000ULL /*Remote client */
119 #define OBD_CONNECT_RMT_CLIENT_FORCE 0x20000ULL /*Remote client by force */
120 #define OBD_CONNECT_BRW_SIZE 0x40000ULL /*Max bytes per rpc */
121 #define OBD_CONNECT_QUOTA64 0x80000ULL /*Not used since 2.4 */
122 #define OBD_CONNECT_MDS_CAPA 0x100000ULL /*MDS capability */
123 #define OBD_CONNECT_OSS_CAPA 0x200000ULL /*OSS capability */
124 #define OBD_CONNECT_CANCELSET 0x400000ULL /*Early batched cancels. */
125 #define OBD_CONNECT_SOM 0x800000ULL /*Size on MDS */
126 #define OBD_CONNECT_AT 0x1000000ULL /*client uses AT */
127 #define OBD_CONNECT_LRU_RESIZE 0x2000000ULL /*LRU resize feature. */
128 #define OBD_CONNECT_MDS_MDS 0x4000000ULL /*MDS-MDS connection */
129 #define OBD_CONNECT_REAL 0x8000000ULL /*real connection */
130 #define OBD_CONNECT_CHANGE_QS 0x10000000ULL /*Not used since 2.4 */
131 #define OBD_CONNECT_CKSUM 0x20000000ULL /*support several cksum algos*/
132 #define OBD_CONNECT_FID 0x40000000ULL /*FID is supported by server */
133 #define OBD_CONNECT_VBR 0x80000000ULL /*version based recovery */
134 #define OBD_CONNECT_LOV_V3 0x100000000ULL /*client supports LOV v3 EA */
135 #define OBD_CONNECT_GRANT_SHRINK 0x200000000ULL /* support grant shrink */
136 #define OBD_CONNECT_SKIP_ORPHAN 0x400000000ULL /* don't reuse orphan objids */
137 #define OBD_CONNECT_MAX_EASIZE 0x800000000ULL /* preserved for large EA */
138 #define OBD_CONNECT_FULL20 0x1000000000ULL /* it is 2.0 client */
139 #define OBD_CONNECT_LAYOUTLOCK 0x2000000000ULL /* client uses layout lock */
#define OBD_CONNECT_64BITHASH 0x4000000000ULL /* client supports 64-bits
* dir hash */
142 #define OBD_CONNECT_MAXBYTES 0x8000000000ULL /* max stripe size */
143 #define OBD_CONNECT_IMP_RECOV 0x10000000000ULL /* imp recovery support */
144 #define OBD_CONNECT_JOBSTATS 0x20000000000ULL /* jobid in ptlrpc_body */
145 #define OBD_CONNECT_UMASK 0x40000000000ULL /* create uses client umask */
146 #define OBD_CONNECT_EINPROGRESS 0x80000000000ULL /* client handles -EINPROGRESS
147 * RPC error properly */
148 #define OBD_CONNECT_GRANT_PARAM 0x100000000000ULL/* extra grant params used for
149 * finer space reservation */
150 #define OBD_CONNECT_FLOCK_OWNER 0x200000000000ULL /* for the fixed 1.8
151 * policy and 2.x server */
152 #define OBD_CONNECT_LVB_TYPE 0x400000000000ULL /* variable type of LVB */
153 #define OBD_CONNECT_NANOSEC_TIME 0x800000000000ULL /* nanosecond timestamps */
154 #define OBD_CONNECT_LIGHTWEIGHT 0x1000000000000ULL/* lightweight connection */
155 #define OBD_CONNECT_SHORTIO 0x2000000000000ULL/* short io */
156 #define OBD_CONNECT_PINGLESS 0x4000000000000ULL/* pings not required */
157 #define OBD_CONNECT_FLOCK_DEAD 0x8000000000000ULL/* deadlock detection */
158 #define OBD_CONNECT_DISP_STRIPE 0x10000000000000ULL/* create stripe disposition*/
#define OBD_CONNECT_OPEN_BY_FID 0x20000000000000ULL /* open by fid won't pack
* name in request */
Each flag corresponds to a particular capability that the client and
target together will honor. A client sends a message including some
subset of these capabilities during a connection request to a
specific target, telling the server which capabilities the client
has. The server then replies with the subset of those capabilities it
agrees to honor (for the given target).
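The negotiation described above can be sketched as a bitwise intersection. The helper below uses a reduced, illustrative flag subset (the real values are the OBD_CONNECT_* constants listed above); it is not the actual Lustre code.

```c
#include <stdint.h>

/* Illustrative subset of the OBD_CONNECT_* flag values */
#define CONNECT_GRANT   0x8ULL
#define CONNECT_VERSION 0x20ULL
#define CONNECT_CKSUM   0x20000000ULL

/* The server agrees to the intersection of what the client proposed
 * and what this target supports. */
static uint64_t negotiate_flags(uint64_t client_proposed,
                                uint64_t server_supported)
{
        return client_proposed & server_supported;
}
```

Any flag the client did not propose, or the target does not support, is simply absent from the reply.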
If the OBD_CONNECT_VERSION flag is set then the 'ocd_version' field is
honored. The 'ocd_version' gives an encoding of the Lustre version;
for example, version 2.7.32 would be the hexadecimal number 0x02072000.
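A sketch of this encoding, assuming one byte each for major, minor, patch, and fix packed high to low (this mirrors the OBD_OCD_VERSION macro in the Lustre sources):

```c
#include <stdint.h>

/* Pack a Lustre version as 0xMMmmppff: major, minor, patch, fix. */
static uint32_t ocd_version_encode(uint32_t major, uint32_t minor,
                                   uint32_t patch, uint32_t fix)
{
        return (major << 24) | (minor << 16) | (patch << 8) | fix;
}
```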
175 If the OBD_CONNECT_GRANT flag is set then the 'ocd_grant' field is
176 honored. The 'ocd_grant' value in a reply (to a connection request)
177 sets the client's grant.
179 If the OBD_CONNECT_INDEX flag is set then the 'ocd_index' field is
180 honored. The 'ocd_index' value is set in a reply to a connection
181 request. It holds the LOV index of the target.
183 If the OBD_CONNECT_BRW_SIZE flag is set then the 'ocd_brw_size' field
184 is honored. The 'ocd_brw_size' value sets the size of the maximum
185 supported RPC. The client proposes a value in its connection request,
186 and the server's reply will either agree or further limit the size.
188 If the OBD_CONNECT_IBITS flag is set then the 'ocd_ibits_known' field
189 is honored. The 'ocd_ibits_known' value determines the handling of
190 locks on inodes. See the discussion of inodes and extended attributes.
192 If the OBD_CONNECT_GRANT_PARAM flag is set then the 'ocd_blocksize',
193 'ocd_inodespace', and 'ocd_grant_extent' fields are honored. A server
194 reply uses the 'ocd_blocksize' value to inform the client of the log
195 base two of the size in bytes of the backend file system's blocks.
197 A server reply uses the 'ocd_inodespace' value to inform the client of
198 the log base two of the size of an inode.
200 Under some circumstances (for example when ZFS is the back end file
201 system) there may be additional overhead in handling writes for each
202 extent. The server uses the 'ocd_grant_extent' value to inform the
203 client of the size in bytes consumed from its grant on the server when
204 creating a new file. The client uses this value in calculating how
205 much dirty write cache it has and whether it has reached the limit
206 established by the target's grant.
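A sketch of how a client might account grant consumption from these parameters. The function name, the rounding policy, and the per-extent charge shown here are illustrative assumptions, not the actual client code:

```c
#include <stdint.h>

/* Hypothetical helper: estimate how much of the client's grant a
 * write of 'nbytes' spanning 'nextents' extents consumes, given the
 * OBD_CONNECT_GRANT_PARAM values from the server's reply. */
static uint64_t grant_consumed(uint64_t nbytes, uint32_t nextents,
                               uint8_t ocd_blocksize,
                               uint32_t ocd_grant_extent)
{
        uint64_t bs = 1ULL << ocd_blocksize;   /* backend block size */
        /* round the data up to whole backend blocks ... */
        uint64_t blocks = (nbytes + bs - 1) / bs;
        /* ... and add the per-extent overhead reported by the server */
        return blocks * bs + (uint64_t)nextents * ocd_grant_extent;
}
```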
208 If the OBD_CONNECT_TRANSNO flag is set then the 'ocd_transno' field is
209 honored. A server uses the 'ocd_transno' value during recovery to
inform the client of the transaction number at which it should begin
replay.
213 If the OBD_CONNECT_MDS flag is set then the 'ocd_group' field is
214 honored. When an MDT connects to an OST the 'ocd_group' field informs
215 the OSS of the MDT's index. Objects on that OST for that MDT will be
216 in a common namespace served by that MDT.
218 If the OBD_CONNECT_CKSUM flag is set then the 'ocd_cksum_types' field
is honored. The client uses the 'ocd_cksum_types' field to propose
to the server the client's available (presumably hardware assisted)
checksum mechanisms. The server replies with the checksum types it has
available. Finally, the client will employ the fastest of the agreed
upon mechanisms.
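This selection can be sketched as a bitmask intersection followed by a fixed preference order. The flag values and the ordering here are illustrative, not the real OBD_CKSUM_* definitions:

```c
#include <stdint.h>

/* Illustrative checksum-type bits */
#define CKSUM_CRC32  0x1u
#define CKSUM_ADLER  0x2u
#define CKSUM_CRC32C 0x4u   /* assumed hardware assisted, so preferred */

/* Pick the "fastest" type both sides support, fastest first. */
static uint32_t pick_cksum_type(uint32_t client_types, uint32_t server_types)
{
        static const uint32_t pref[] = { CKSUM_CRC32C, CKSUM_ADLER, CKSUM_CRC32 };
        uint32_t common = client_types & server_types;

        for (unsigned i = 0; i < sizeof(pref) / sizeof(pref[0]); i++)
                if (common & pref[i])
                        return pref[i];
        return 0; /* no common type: checksums cannot be used */
}
```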
225 If the OBD_CONNECT_MAX_EASIZE flag is set then the 'ocd_max_easize'
226 field is honored. The server uses 'ocd_max_easize' to inform the
227 client about the amount of space that can be allocated in each inode
228 for extended attributes. The 'ocd_max_easize' specifically refers to
229 the space used for striping information. This allows the client to
determine the maximum layout size (and hence stripe count) that can be
used for a file.
233 The 'ocd_instance' field (alone) is not governed by an OBD_CONNECT_*
234 flag. The MGS uses the 'ocd_instance' value in its reply to a
235 connection request to inform the server and target of the "era" of its
236 connection. The MGS initializes the era value for each server to zero
237 and increments that value every time the target connects. This
238 supports imperative recovery.
240 If the OBD_CONNECT_MAXBYTES flag is set then the 'ocd_maxbytes' field
241 is honored. An OSS uses the 'ocd_maxbytes' value to inform the client
242 of the maximum OST object size for this target. A stripe on any OST
for a multi-striped file cannot be larger than the minimum
'ocd_maxbytes' value among the OSTs holding that file's stripes.
246 The additional space in the 'obd_connect_data' structure is unused and
247 reserved for future use.
249 Other OBD_CONNECT_* flags have no corresponding field in
250 obd_connect_data but still control client-server supported features.
252 If the OBD_CONNECT_RDONLY flag is set then the client is mounted in
read-only mode and the server honors that by denying any modification
requests from that client.
256 If the OBD_CONNECT_SRVLOCK flag is set then the client and server
257 support lockless IO. The server will take locks for client IO requests
258 with the OBD_BRW_SRVLOCK flag in the 'niobuf_remote' structure
259 flags. This is used for Direct IO. The client takes no LDLM lock and
260 delegates locking to the server.
If the OBD_CONNECT_ACL flag is set then the server supports the ACL
mount option for its filesystem, and the client supports that mount
option as well.
If the OBD_CONNECT_XATTR flag is set then the server supports user
extended attributes. This is determined by the mount options of the
servers' backend file systems, and is reflected on the client side by
the same mount option for the Lustre file system itself.
271 If the OBD_CONNECT_TRUNCLOCK flag is set then the client and the
272 server support lockless truncate. This is realized in an OST_PUNCH RPC
by setting the 'obdo' structure's 'o_flags' field to include the
274 OBD_FL_SRVLOCK. In that circumstance the client takes no lock, and the
275 server must take a lock on the resource.
277 If the OBD_CONNECT_ATTRFID flag is set then the server supports
getattr requests by the FID of a file instead of its name. This reduces
279 unnecessary RPCs for DNE.
281 If the OBD_CONNECT_NODEVOH flag is set then the server provides no
282 open handle for special inodes.
284 fixme: finish with the rest of flags
The remaining flags are obsolete and no longer used.
The OBD_CONNECT_REQPORTAL flag was used to specify that the client
may use OST_REQUEST_PORTAL for non-IO requests so that they do not
interfere with the IO portal, e.g. for MDS-OST interaction. That is
now the default request portal for the OSC, so the flag has no effect,
though the client still sets it during the connection process.
The OBD_CONNECT_CROW flag was used for create-on-write functionality
on the OST, whereby data objects would be created upon the first write
from the client. This was never implemented because of complex
recovery problems.
The OBD_CONNECT_SOM flag was used to signal that the MDS is capable
of storing the file size in the file attributes, so that a client may
get it directly from the MDS, avoiding a glimpse request to the
OSTs. This was implemented as a demonstration feature and was not
enabled by default. It was finally disabled in Lustre 2.7 because it
created quite complex recovery cases with relatively small benefits.
305 The OBD_CONNECT_JOIN flag was used for the 'join files' feature, which
allowed files to be concatenated. Lustre no longer supports that
feature; the flag is retained to prevent reuse of the bit.
309 fixme: finish with rest of unused flags
315 #define IMP_STATE_HIST_LEN 16
316 struct import_state_hist {
enum lustre_imp_state ish_state;
time_t ish_time;
};

struct obd_import {
struct portals_handle imp_handle;
322 atomic_t imp_refcount;
323 struct lustre_handle imp_dlm_handle;
324 struct ptlrpc_connection *imp_connection;
325 struct ptlrpc_client *imp_client;
326 cfs_list_t imp_pinger_chain;
327 cfs_list_t imp_zombie_chain;
328 cfs_list_t imp_replay_list;
329 cfs_list_t imp_sending_list;
330 cfs_list_t imp_delayed_list;
331 cfs_list_t imp_committed_list;
332 cfs_list_t *imp_replay_cursor;
333 struct obd_device *imp_obd;
334 struct ptlrpc_sec *imp_sec;
335 struct mutex imp_sec_mutex;
336 cfs_time_t imp_sec_expire;
337 wait_queue_head_t imp_recovery_waitq;
338 atomic_t imp_inflight;
339 atomic_t imp_unregistering;
340 atomic_t imp_replay_inflight;
341 atomic_t imp_inval_count;
342 atomic_t imp_timeouts;
343 enum lustre_imp_state imp_state;
344 struct import_state_hist imp_state_hist[IMP_STATE_HIST_LEN];
345 int imp_state_hist_idx;
348 int imp_last_generation_checked;
349 __u64 imp_last_replay_transno;
350 __u64 imp_peer_committed_transno;
351 __u64 imp_last_transno_checked;
352 struct lustre_handle imp_remote_handle;
353 cfs_time_t imp_next_ping;
354 __u64 imp_last_success_conn;
355 cfs_list_t imp_conn_list;
356 struct obd_import_conn *imp_conn_current;
365 imp_server_timeout:1,
366 imp_delayed_recovery:1,
367 imp_no_lock_replay:1,
370 imp_force_next_verify:1,
373 imp_no_pinger_recover:1,
375 imp_force_reconnect:1,
377 __u32 imp_connect_op;
378 struct obd_connect_data imp_connect_data;
379 __u64 imp_connect_flags_orig;
380 int imp_connect_error;
382 __u32 imp_msghdr_flags; /* adjusted based on server capability */
383 struct ptlrpc_request_pool *imp_rq_pool; /* emergency request pool */
384 struct imp_at imp_at; /* adaptive timeout data */
time_t imp_last_reply_time; /* for health check */
};
389 The 'imp_handle' value is the unique id for the import, and is used as
390 a hash key to gain access to it. It is not used in any of the Lustre
391 protocol messages, but rather is just for internal reference.
393 The 'imp_refcount' is also for internal use. The value is incremented
394 with each RPC created, and decremented as the request is freed. When
395 the reference count is zero the import can be freed, as when the
396 target is being disconnected.
The 'imp_dlm_handle' is a reference to the LDLM export for this
connection.
401 There can be multiple paths through the network to a given
402 target, in which case there would be multiple 'obd_import_conn' items
on the 'imp_conn_list'. Each 'obd_import_conn' includes a
404 'ptlrpc_connection', so 'imp_connection' points to the one that is
407 The 'imp_client' identifies the (local) portals for sending and
408 receiving messages as well as the client's name. The information is
409 specific to either an MDC or an OSC.
The 'imp_pinger_chain' places the import on a linked list of imports
412 that need periodic pings.
414 The 'imp_zombie_chain' places the import on a list ready for being
415 freed. Unused imports ('imp_refcount' is zero) are deleted
416 asynchronously by a garbage collecting process.
418 In order to support recovery the client must keep requests that are in
419 the process of being handled by the target. The target replies to a
420 request as soon as the target has made its local update to
421 memory. When the client receives that reply the request is put on the
422 'imp_replay_list'. In the event of a failure (target crash, lost
423 message) this list is then replayed for the target during the recovery
424 process. When a request has been sent but has not yet received a reply
425 it is placed on the 'imp_sending_list'. In the event of a failure
426 those will simply be replayed after any recovery has been
427 completed. Finally, there may be requests that the client is delaying
428 before it sends them. This can happen if the client is in a degraded
429 mode, as when it is in recovery after a failure. These requests are
430 put on the 'imp_delayed_list' and not processed until recovery is
431 complete and the 'imp_sending_list' has been replayed.
433 In order to support recovery 'open' requests must be preserved even
434 after they have completed. Those requests are placed on the
435 'imp_committed_list' and the 'imp_replay_cursor' allows for
436 accelerated access to those items.
438 The 'imp_obd' is a reference to the details about the target device
439 that is the subject of this import. There is a lot of state info in
440 there along with many implementation details that are not relevant to
441 the actual Lustre protocol. fixme: I'll want to go through all of the
fields in that structure to see which, if any, need more discussion.
445 The security policy and settings are kept in 'imp_sec', and
446 'imp_sec_mutex' helps manage access to that info. The 'imp_sec_expire'
setting is in support of security policies that have an expiration
time.
450 Some processes may need the import to be in a fully connected state in
451 order to proceed. The 'imp_recovery_waitq' is where those threads will
452 wait during recovery.
454 The 'imp_inflight' field counts the number of in-flight requests. It
455 is incremented with each request sent and decremented with each reply
458 The client reserves buffers for the processing of requests and
459 replies, and then informs LNet about those buffers. Buffers may get
460 reused during subsequent processing, but then a point may come when
461 the buffer is no longer going to be used. The client increments the
462 'imp_unregistering' counter and informs LNet the buffer is no longer
463 needed. When LNet has freed the buffer it will notify the client and
464 then the 'imp_unregistering' can be decremented again.
During recovery the 'imp_replay_inflight' counts the number of requests
from the replay list that have been sent and have not yet been replied to.
469 The 'imp_inval_count' field counts how many threads are in the process
470 of cleaning up this connection or waiting for cleanup to complete. The
471 cleanup itself may be needed in the case there is an eviction or other
472 problem (fixme what other problem?). The cleanup may involve freeing
473 allocated resources, updating internal state, running replay lists,
474 and invalidating cache. Since it could take a while there may end up
475 multiple threads waiting on this process to complete.
The 'imp_timeouts' field is a counter that is incremented every time
478 there is a timeout in communication with the target.
480 The 'imp_state' tracks the state of the import. It draws from the
481 enumerated set of values:
483 .enum_lustre_imp_state
| LUSTRE_IMP_CLOSED | 1
| LUSTRE_IMP_NEW | 2
489 | LUSTRE_IMP_DISCON | 3
490 | LUSTRE_IMP_CONNECTING | 4
491 | LUSTRE_IMP_REPLAY | 5
492 | LUSTRE_IMP_REPLAY_LOCKS | 6
493 | LUSTRE_IMP_REPLAY_WAIT | 7
494 | LUSTRE_IMP_RECOVER | 8
495 | LUSTRE_IMP_FULL | 9
496 | LUSTRE_IMP_EVICTED | 10
498 fixme: what are the transitions between these states? The
499 'imp_state_hist' array maintains a list of the last 16
500 (IMP_STATE_HIST_LEN) states the import was in, along with the time it
501 entered each (fixme: or is it when it left that state?). The list is
502 maintained in a circular manner, so the 'imp_state_hist_idx' points to
503 the entry in the list for the most recently visited state.
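The circular maintenance described above can be sketched as follows, with toy types standing in for the real import fields:

```c
#include <time.h>

#define IMP_STATE_HIST_LEN 16

struct state_hist_entry {
        int    ish_state;
        time_t ish_time;
};

/* 'idx' always names the slot of the most recently recorded state;
 * older entries sit behind it, wrapping around the array. */
struct state_hist {
        struct state_hist_entry h[IMP_STATE_HIST_LEN];
        int idx;
};

static void hist_record(struct state_hist *sh, int state, time_t now)
{
        sh->idx = (sh->idx + 1) % IMP_STATE_HIST_LEN;
        sh->h[sh->idx].ish_state = state;
        sh->h[sh->idx].ish_time  = now;
}
```

Once more than IMP_STATE_HIST_LEN transitions have occurred, each new entry silently overwrites the oldest one.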
505 The 'imp_generation' and 'imp_conn_cnt' fields are monotonically
506 increasing counters. Every time a connection request is sent to the
507 target the 'imp_conn_cnt' counter is incremented, and every time a
508 reply is received for the connection request the 'imp_generation'
509 counter is incremented.
511 The 'imp_last_generation_checked' implements an optimization. When a
512 replay process has successfully traversed the reply list the
513 'imp_generation' value is noted here. If the generation has not
514 incremented then the replay list does not need to be traversed again.
516 During replay the 'imp_last_replay_transno' is set to the transaction
517 number of the last request being replayed, and
'imp_peer_committed_transno' is set to the 'pb_last_committed' value
519 (of the 'ptlrpc_body') from replies if that value is higher than the
520 previous 'imp_peer_committed_transno'. The 'imp_last_transno_checked'
521 field implements an optimization. It is set to the
522 'imp_last_replay_transno' as its replay is initiated. If
523 'imp_last_transno_checked' is still 'imp_last_replay_transno' and
524 'imp_generation' is still 'imp_last_generation_checked' then there
525 are no additional requests ready to be removed from the replay
526 list. Furthermore, 'imp_last_transno_checked' may no longer be needed,
527 since the committed transactions are now maintained on a separate list.
529 The 'imp_remote_handle' is the handle sent by the target in a
530 connection reply message to uniquely identify the export for this
531 target and client that is maintained on the server. This is the handle
532 used in all subsequent messages to the target.
534 There are two separate ping intervals (fixme: what are the
535 values?). If there are no uncommitted messages for the target then the
536 default ping interval is used to set the 'imp_next_ping' to the time
537 the next ping needs to be sent. If there are uncommitted requests then
538 a "short interval" is used to set the time for the next ping.
540 The 'imp_last_success_conn' value is set to the time of the last
541 successful connection. fixme: The source says it is in 64 bit
542 jiffies, but does not further indicate how that value is calculated.
544 Since there can actually be multiple connection paths for a target
545 (due to failover or multihomed configurations) the import maintains a
546 list of all the possible connection paths in the list pointed to by
547 the 'imp_conn_list' field. The 'imp_conn_current' points to the one
548 currently in use. Compare with the 'imp_connection' fields. They point
549 to different structures, but each is reachable from the other.
551 Most of the flag, state, and list information in the import needs to
552 be accessed atomically. The 'imp_lock' is used to maintain the
553 consistency of the import while it is manipulated by multiple threads.
555 The various flags are documented in the source code and are largely
556 obvious from those short comments, reproduced here:
562 | imp_no_timeout | timeouts are disabled
563 | imp_invalid | client has been evicted
564 | imp_deactive | client administratively disabled
565 | imp_replayable | try to recover the import
566 | imp_dlm_fake | don't run recovery (timeout instead)
567 | imp_server_timeout | use 1/2 timeout on MDSs and OSCs
568 | imp_delayed_recovery | VBR: imp in delayed recovery
569 | imp_no_lock_replay | VBR: if gap was found then no lock replays
570 | imp_vbr_failed | recovery by versions was failed
| imp_force_verify | force an immediate ping
572 | imp_force_next_verify | force a scheduled ping
573 | imp_pingable | target is pingable
574 | imp_resend_replay | resend for replay
575 | imp_no_pinger_recover | disable normal recovery, for test only.
576 | imp_need_mne_swab | need IR MNE swab
577 | imp_force_reconnect | import must be reconnected, not new connection
578 | imp_connect_tried | import has tried to connect with server
A few additional notes are in order. The 'imp_dlm_fake' flag signifies
that this is not a "real" import, but rather a "reverse" import
in support of the LDLM. When the LDLM invokes callback operations the
messages are initiated at the other end, so there needs to be a fake
import to receive the replies from the operation. Prior to the
introduction of adaptive timeouts the servers were given fixed timeout
values that were half those used for the clients. The
587 'imp_server_timeout' flag indicated that the import should use the
588 half-sized timeouts, but with the introduction of adaptive timeouts
589 this facility is no longer used. "VBR" is "version based recovery",
590 and it introduces a new possibility for handling requests. Previously,
if there were a gap in the transaction number sequence then the requests
associated with the missing transaction numbers would be
discarded. With VBR those transactions only need to be discarded if
594 there is an actual dependency between the ones that were skipped and
595 the currently latest committed transaction number. fixme: What are the
596 circumstances that would lead to setting the 'imp_force_next_verify'
597 or 'imp_pingable' flags? During recovery, the client sets the
598 'imp_no_pinger_recover' flag, which tells the process to proceed from
the current value of 'imp_last_replay_transno'. The
600 'imp_need_mne_swab' flag indicates a version dependent circumstance
601 where swabbing was inadvertently left out of one processing step.
607 An 'obd_export' structure for a given target is created on a server
608 for each client that connects to that target. The exports for all the
609 clients for a given target are managed together. The export represents
610 the connection state between the client and target as well as the
611 current state of any ongoing activity. Thus each pending request will
612 have a reference to the export. The export is discarded if the
613 connection goes away, but only after all the references to it have
614 been cleaned up. The state information for each export is also
615 maintained on disk. In the event of a server failure, that or another
server can read the export data from disk to enable recovery.
struct obd_export {
struct portals_handle exp_handle;
621 atomic_t exp_refcount;
622 atomic_t exp_rpc_count;
623 atomic_t exp_cb_count;
624 atomic_t exp_replay_count;
625 atomic_t exp_locks_count;
626 #if LUSTRE_TRACKS_LOCK_EXP_REFS
627 cfs_list_t exp_locks_list;
628 spinlock_t exp_locks_list_guard;
630 struct obd_uuid exp_client_uuid;
631 cfs_list_t exp_obd_chain;
632 cfs_hlist_node_t exp_uuid_hash;
633 cfs_hlist_node_t exp_nid_hash;
634 cfs_list_t exp_obd_chain_timed;
635 struct obd_device *exp_obd;
636 struct obd_import *exp_imp_reverse;
637 struct nid_stat *exp_nid_stats;
638 struct ptlrpc_connection *exp_connection;
640 cfs_hash_t *exp_lock_hash;
641 cfs_hash_t *exp_flock_hash;
642 cfs_list_t exp_outstanding_replies;
643 cfs_list_t exp_uncommitted_replies;
644 spinlock_t exp_uncommitted_replies_lock;
645 __u64 exp_last_committed;
646 cfs_time_t exp_last_request_time;
647 cfs_list_t exp_req_replay_queue;
649 struct obd_connect_data exp_connect_data;
650 enum obd_option exp_flags;
658 exp_req_replay_needed:1,
659 exp_lock_replay_needed:1,
665 enum lustre_sec_part exp_sp_peer;
666 struct sptlrpc_flavor exp_flvr;
667 struct sptlrpc_flavor exp_flvr_old[2];
668 cfs_time_t exp_flvr_expire[2];
669 spinlock_t exp_rpc_lock;
670 cfs_list_t exp_hp_rpcs;
671 cfs_list_t exp_reg_rpcs;
672 cfs_list_t exp_bl_list;
673 spinlock_t exp_bl_list_lock;
675 struct tg_export_data eu_target_data;
676 struct mdt_export_data eu_mdt_data;
677 struct filter_export_data eu_filter_data;
678 struct ec_export_data eu_ec_data;
679 struct mgs_export_data eu_mgs_data;
struct nodemap *exp_nodemap;
};
The 'exp_handle' carries a little extra information compared with a
'struct lustre_handle', which is just the cookie. The cookie that the
server generates to uniquely identify this connection gets put into
this structure along with additional information about the device in
question. This is the cookie that the *_CONNECT reply sends back to
the client, where it is then stored in the client's import.
692 The 'exp_refcount' gets incremented whenever some aspect of the export
693 is "in use". The arrival of an otherwise unprocessed message for this
694 target will increment the refcount. A reference by an LDLM lock that
695 gets taken will increment the refcount. Callback invocations and
replay also lead to incrementing the 'exp_refcount'. The next four fields
- 'exp_rpc_count', 'exp_cb_count', 'exp_replay_count', and
'exp_locks_count' - all subcategorize the 'exp_refcount'. The
699 reference counter keeps the export alive while there are any users of
700 that export. The reference counter is also used for debug
701 purposes. Similarly, the 'exp_locks_list' and 'exp_locks_list_guard'
are further debug info that list the actual locks accounted for in
'exp_locks_count'.
705 The 'exp_client_uuid' gives the UUID of the client connected to this
706 export. Fixme: when and how does the UUID get generated?
708 The server maintains all the exports for a given target on a circular
709 list. Each export's place on that list is maintained in the
710 'exp_obd_chain'. A common activity is to look up the export based on
711 the UUID or the nid of the client, and the 'exp_uuid_hash' and
712 'exp_nid_hash' fields maintain this export's place in hashes
713 constructed for that purpose.
Exports are also maintained on a list sorted by the last time the
corresponding client was heard from. The 'exp_obd_chain_timed' field
maintains the export's place on that list. When a message arrives from
the client the time is "now", so the export gets put at the end of the
list. Since the list is circular, the next export is then the oldest.
If it has not been heard from within its timeout interval that export
is marked for later eviction.
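The aging policy can be sketched like this (hypothetical 'toy_*'
names; in Lustre the touch also moves the export to the tail of
'exp_obd_chain_timed' so the head stays the oldest):

```c
#include <time.h>

/* Hedged sketch of export aging on the timed list. */
struct toy_timed_export {
        time_t exp_last_request_time;
};

/* A message from the client stamps the export with "now". */
static void toy_touch(struct toy_timed_export *exp, time_t now)
{
        exp->exp_last_request_time = now;
}

/* The export becomes an eviction candidate once its last contact
 * is older than the timeout interval. */
static int toy_evict_candidate(const struct toy_timed_export *exp,
                               time_t now, time_t timeout)
{
        return now - exp->exp_last_request_time > timeout;
}
```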
The 'exp_obd' points to the 'obd_device' structure for the device that
is the target of this export.
In the event of an LDLM callback the export needs the ability to
initiate messages back to the client. The 'exp_imp_reverse' provides a
"reverse" import that manages this capability.
The '/proc' stats for the export (and the target) get updated via the
'exp_nid_stats'.
The 'exp_connection' points to the connection information for this
export. This is the information about the actual networking pathway(s)
that get used for communication.
The 'exp_conn_cnt' notes the connection count value from the client at
the time of the connection. In the event that more than one connection
request is issued before the connection is established, the
'exp_conn_cnt' records the highest value. If a previous connection
attempt (with a lower value) arrives later it may be safely
discarded. Every request lists its connection count, so non-connection
requests with lower connection count values can also be discarded.
Note that this does not count how many times the client has connected
to the target. If a client is evicted the export is deleted once it
has been cleaned up and its 'exp_refcount' reduced to zero. A new
connection from the client will get a new export.
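The staleness test this implies is simple (the helper name is
hypothetical):

```c
#include <stdint.h>

/* Hedged sketch: a request carrying a connection count lower than
 * the export's recorded 'exp_conn_cnt' belongs to an earlier
 * connection attempt and can be discarded. */
static int toy_req_is_stale(uint32_t exp_conn_cnt, uint32_t req_conn_cnt)
{
        return req_conn_cnt < exp_conn_cnt;
}
```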
The 'exp_lock_hash' provides access to the locks granted to the
corresponding client for this target. If a lock cannot be granted it
is discarded. A file system lock ("flock") is also implemented through
the LDLM lock system, but not all LDLM locks are flocks. The ones that
are flocks are gathered in the 'exp_flock_hash'. This supports
For those requests that initiate file system modifying transactions,
the request and its attendant locks need to be preserved until either
a) the client acknowledges receiving the reply, or b) the transaction
has been committed locally. This ensures a request can be replayed in
the event of a failure. The LDLM lock is kept until one of these
events occurs to prevent any other modifications of the same object.
The reply is kept on the 'exp_outstanding_replies' list until the LNet
layer notifies the server that the reply has been acknowledged. A
reply is kept on the 'exp_uncommitted_replies' list until the
transaction (if any) has been committed.
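The retention rule above can be condensed into a predicate (a hedged
sketch with hypothetical names; the real state lives on the
'exp_outstanding_replies' and 'exp_uncommitted_replies' lists):

```c
#include <stdint.h>

/* Hedged sketch: a reply may be released once the client has
 * acknowledged it, or once its transaction (if any) has committed. */
struct toy_reply_state {
        int acked;          /* LNet reported the client's ack */
        uint64_t transno;   /* 0 for non-transactional requests */
};

static int toy_reply_releasable(const struct toy_reply_state *rs,
                                uint64_t last_committed)
{
        return rs->acked ||
               (rs->transno != 0 && rs->transno <= last_committed);
}
```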
The 'exp_last_committed' value keeps the transaction number of the
last committed transaction. Every reply to a client includes this
value as a means of early-as-possible notification of transactions
that have been committed.
The 'exp_last_request_time' is self-explanatory.
During replay, a request that is waiting to be replayed is maintained
on the 'exp_req_replay_queue' list.
The 'exp_lock' spin-lock controls access to the export's flags, as
well as to the 'exp_outstanding_replies' list and the reverse
import.
The 'exp_connect_data' refers to an 'obd_connect_data' structure for
the connection established between this target and the client this
export refers to. See also the corresponding entry in the import and
in the connect messages passed between the hosts.
The 'exp_flags' field encodes three directives as follows:

OBD_OPT_FORCE = 0x0001,
OBD_OPT_FAILOVER = 0x0002,
OBD_OPT_ABORT_RECOV = 0x0004,
fixme: Are these set for some exports as a condition of their
existence? Or do they reflect a transient state the export is passing
through?
The 'exp_failed' flag gets set whenever the target has failed for any
reason or the export is otherwise due to be cleaned up. Once set it
will not be unset in this export. Any subsequent connection between
the client and the target would be governed by a new export.
After a failure, export data is retrieved from disk and the exports
are recreated. Exports created in this way will have their
'exp_in_recovery' flag set. Once any outstanding requests and locks
have been recovered for the client, the export is recovered and
'exp_in_recovery' can be cleared. When all the client exports for a
given target have been recovered, the target is considered recovered,
and when all targets have been recovered, the server is considered
recovered.
A *_DISCONNECT message from the client will set the
'exp_disconnected' flag, as will any sort of failure of the target.
Once set, the export will be cleaned up and deleted.
When a *_CONNECT message arrives the 'exp_connecting' flag is set. If
for some reason a second *_CONNECT request arrives from the client it
can be discarded when this flag is set.
The 'exp_delayed' flag is no longer used. In older code it indicated
that recovery had not completed in a timely fashion, but that a tardy
recovery would still be possible, since there were no dependencies on
The 'exp_vbr_failed' flag indicates a failure during the recovery
process. See <<recovery>> for a more detailed discussion of recovery
and transaction replay. For a file system modifying request, the
server composes its reply including the 'pb_pre_versions' entries in
'ptlrpc_body', which indicate the most recent updates to the
object. The client updates the request with the 'pb_transno' and
'pb_pre_versions' from the reply, and keeps that request until the
target signals that the transaction has been committed to disk. If
the client times out without that confirmation then it will 'replay'
the request, which now includes the 'pb_pre_versions' information.
During a replay the target checks that the object has the same
version as the 'pb_pre_versions' in the replay. If this check fails
then the object cannot be restored to the same state it was in before
the failure. Usually that happens if the recovery process fails for
the connection between some other client and this target, so part of
the change needed for this client was not restored. At that point the
'exp_vbr_failed' flag is set to indicate that version based recovery
failed. This leads to the client being evicted and this export being
cleaned up and deleted.
At the start of recovery both the 'exp_req_replay_needed' and
'exp_lock_replay_needed' flags are set. As request replay is completed
the 'exp_req_replay_needed' flag is cleared. As lock replay is
completed the 'exp_lock_replay_needed' flag is cleared. Once both are
cleared the 'exp_in_recovery' flag can be cleared.
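The flag progression can be sketched as follows (hypothetical
'toy_*' names and bitfields; not the kernel's implementation):

```c
/* Hedged sketch: 'exp_in_recovery' clears only after both request
 * replay and lock replay have completed. */
struct toy_recovery_flags {
        unsigned int exp_in_recovery:1;
        unsigned int exp_req_replay_needed:1;
        unsigned int exp_lock_replay_needed:1;
};

static void toy_recovery_start(struct toy_recovery_flags *f)
{
        f->exp_in_recovery = 1;
        f->exp_req_replay_needed = 1;
        f->exp_lock_replay_needed = 1;
}

static void toy_req_replay_done(struct toy_recovery_flags *f)
{
        f->exp_req_replay_needed = 0;
        if (!f->exp_lock_replay_needed)
                f->exp_in_recovery = 0;
}

static void toy_lock_replay_done(struct toy_recovery_flags *f)
{
        f->exp_lock_replay_needed = 0;
        if (!f->exp_req_replay_needed)
                f->exp_in_recovery = 0;
}
```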
The 'exp_need_sync' flag supports an optimization. At mount time it
is likely that every client (potentially thousands) will create an
export, and each export will need to be saved to disk synchronously.
This can lead to an unusually high and poorly performing interaction
with the disk. When the export is created, the 'exp_need_sync' flag
is set and the actual writing to disk is delayed. As transactions
arrive from clients (in a much less coordinated fashion), the
'exp_need_sync' flag indicates a need to have the export data on disk
before proceeding with a new transaction, so the next transaction
that updates the export is committed synchronously, flushing all
changes to disk. At that point the flag is cleared (except see
below).
In DNE (phase I) the export for an MDT managing the connection from
another MDT will want to always keep the 'exp_need_sync' flag set. For
that special case such an export sets the 'exp_keep_sync', which then
prevents the 'exp_need_sync' flag from ever being cleared. This will
no longer be needed in DNE Phase II.
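The interaction between the two flags can be condensed into one
decision (a hedged sketch with hypothetical names):

```c
/* Hedged sketch of the delayed-sync decision: the first transaction
 * that finds 'exp_need_sync' set commits synchronously; the flag
 * then clears unless 'exp_keep_sync' pins it, as for DNE phase I
 * MDT-to-MDT exports. */
struct toy_sync_flags {
        unsigned int exp_need_sync:1;
        unsigned int exp_keep_sync:1;
};

/* Returns 1 if this transaction must commit synchronously. */
static int toy_transaction_needs_sync(struct toy_sync_flags *f)
{
        if (!f->exp_need_sync)
                return 0;              /* export data already on disk */
        if (!f->exp_keep_sync)
                f->exp_need_sync = 0;  /* one sync commit is enough */
        return 1;
}
```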
The 'exp_flvr_changed' and 'exp_flvr_adapt' flags along with the
'exp_sp_peer', 'exp_flvr', 'exp_flvr_old', and 'exp_flvr_expire'
fields are all used to manage the security settings for the
connection. Security is discussed in the <<security>> section. (fixme:
The 'exp_libclient' flag indicates that the export is for a client
based on "liblustre". This allows for simplified handling on the
server. (fixme: how is processing simplified? It sounds like I may
need a whole special section on liblustre.)
The 'exp_need_mne_swab' flag indicates the presence of an old bug that
affected one special case of failed swabbing. It is not part of
As RPCs arrive they are first subjected to triage. Each request is
placed on the 'exp_hp_rpcs' list and examined to see if it is high
priority (PING, truncate, bulk I/O). If it is not high priority then
it is moved to the 'exp_reg_rpcs' list. The 'exp_rpc_lock' protects
both lists from concurrent access.
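The classification step might look like this (a hedged sketch; the
opcode names are illustrative, not Lustre's actual opcodes):

```c
/* Hedged sketch of RPC triage: every request lands on the
 * high-priority list first, then non-high-priority requests move to
 * the regular list. */
enum toy_opc { TOY_PING, TOY_TRUNCATE, TOY_BULK_IO, TOY_GETATTR };

static int toy_is_high_priority(enum toy_opc opc)
{
        return opc == TOY_PING || opc == TOY_TRUNCATE ||
               opc == TOY_BULK_IO;
}
```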
All arriving LDLM requests get put on the 'exp_bl_list' and access to
that list is controlled via the 'exp_bl_list_lock'.
The union provides for target-specific data. The 'eu_target_data' is
a common core of fields for a generic target. The others are specific
to particular target types: 'eu_mdt_data' for MDTs, 'eu_filter_data'
for OSTs, 'eu_ec_data' for an "echo client" (fixme: describe what an
echo client is somewhere), and 'eu_mgs_data' for the MGS.
The 'exp_bl_lock_at' field supports adaptive timeouts, which will be
discussed separately. (fixme: so discuss it somewhere.)
Each export maintains a connection count.