Connections Between Lustre Entities
-----------------------------------

The Lustre protocol is connection-based in that each pair of
communicating entities maintains shared, coordinated state
information. The most common example of two such entities is a client
and a target on some server. The target is identified by name to the
client through an interaction with the management server. The client
then 'connects' to the given target on the indicated server by sending
the appropriate version of the *_CONNECT message (MGS_CONNECT,
MDS_CONNECT, or OST_CONNECT - collectively *_CONNECT) and receiving
back the corresponding *_CONNECT reply. The server creates an 'export'
for the connection between the target and the client, and the export
holds the server's state information for that connection. When the
client gets the reply it creates an 'import', which holds the client's
state information for that connection. Note that if a server has N
targets and M clients have connected to them, the server will have
N x M exports and each client will have N imports.
There are also connections between the servers: each MDS and OSS has a
connection to the MGS, where the MDS (respectively the OSS) plays the
role of the client in the above discussion. That is, the MDS initiates
the connection and has an import for the MGS, while the MGS has an
export for each MDS. Each MDS connects to each OST, with an import on
the MDS and an export on the OSS. This connection supports requests
from the MDS to the OST for 'statfs' information such as size and
access time values. Each OSS also connects to the first MDS to get
access to auxiliary services, with an import on the OSS and an export
on the first MDS. The auxiliary services are: the File ID Location
Database (FLDB), the quota master service, and the sequence
controller.
Finally, for some communications the roles of message initiation and
message reply are reversed. This is the case, for instance, with
call-back operations. In that case the entity that would normally have
an import has, instead, a 'reverse-export', and the other end of the
connection maintains a 'reverse-import'. The reverse-import uses the
same structure as a regular import, and the reverse-export uses the
same structure as a regular export.
An 'obd_connect_data' structure accompanies every connect operation,
in both the request message and the reply message.
struct obd_connect_data {
        __u64 ocd_connect_flags;
        __u32 ocd_version;      /* OBD_CONNECT_VERSION */
        __u32 ocd_grant;        /* OBD_CONNECT_GRANT */
        __u32 ocd_index;        /* OBD_CONNECT_INDEX */
        __u32 ocd_brw_size;     /* OBD_CONNECT_BRW_SIZE */
        __u64 ocd_ibits_known;  /* OBD_CONNECT_IBITS */
        __u8  ocd_blocksize;    /* OBD_CONNECT_GRANT_PARAM */
        __u8  ocd_inodespace;   /* OBD_CONNECT_GRANT_PARAM */
        __u16 ocd_grant_extent; /* OBD_CONNECT_GRANT_PARAM */
        __u64 ocd_transno;      /* OBD_CONNECT_TRANSNO */
        __u32 ocd_group;        /* OBD_CONNECT_MDS */
        __u32 ocd_cksum_types;  /* OBD_CONNECT_CKSUM */
        __u32 ocd_max_easize;   /* OBD_CONNECT_MAX_EASIZE */
        __u64 ocd_maxbytes;     /* OBD_CONNECT_MAXBYTES */
        /* remaining fields, including 'ocd_instance' and reserved
         * padding, not shown */
};
The 'ocd_connect_flags' field encodes the connect flags giving the
capabilities of a connection between client and target. Several of
those flags (noted in the comments above and the discussion below)
actually control whether the remaining fields of 'obd_connect_data'
get used. The [[connect-flags]] flags are:
#define OBD_CONNECT_RDONLY            0x1ULL /*client has read-only access*/
#define OBD_CONNECT_INDEX             0x2ULL /*connect specific LOV idx */
#define OBD_CONNECT_MDS               0x4ULL /*connect from MDT to OST */
#define OBD_CONNECT_GRANT             0x8ULL /*OSC gets grant at connect */
#define OBD_CONNECT_SRVLOCK          0x10ULL /*server takes locks for cli */
#define OBD_CONNECT_VERSION          0x20ULL /*Lustre versions in ocd */
#define OBD_CONNECT_REQPORTAL        0x40ULL /*Separate non-IO req portal */
#define OBD_CONNECT_ACL              0x80ULL /*access control lists */
#define OBD_CONNECT_XATTR           0x100ULL /*client use extended attr */
#define OBD_CONNECT_CROW            0x200ULL /*MDS+OST create obj on write*/
#define OBD_CONNECT_TRUNCLOCK       0x400ULL /*locks on server for punch */
#define OBD_CONNECT_TRANSNO         0x800ULL /*replay sends init transno */
#define OBD_CONNECT_IBITS          0x1000ULL /*support for inodebits locks*/
#define OBD_CONNECT_JOIN           0x2000ULL /*files can be concatenated.
                                              *We do not support JOIN FILE
                                              *anymore; reserve this flag
                                              *just to prevent the bit
                                              *being reused. */
#define OBD_CONNECT_ATTRFID        0x4000ULL /*Server can GetAttr By Fid*/
#define OBD_CONNECT_NODEVOH        0x8000ULL /*No open hndl on specl nodes*/
#define OBD_CONNECT_RMT_CLIENT    0x10000ULL /*Remote client */
#define OBD_CONNECT_RMT_CLIENT_FORCE 0x20000ULL /*Remote client by force */
#define OBD_CONNECT_BRW_SIZE      0x40000ULL /*Max bytes per rpc */
#define OBD_CONNECT_QUOTA64       0x80000ULL /*Not used since 2.4 */
#define OBD_CONNECT_MDS_CAPA     0x100000ULL /*MDS capability */
#define OBD_CONNECT_OSS_CAPA     0x200000ULL /*OSS capability */
#define OBD_CONNECT_CANCELSET    0x400000ULL /*Early batched cancels. */
#define OBD_CONNECT_SOM          0x800000ULL /*Size on MDS */
#define OBD_CONNECT_AT          0x1000000ULL /*client uses AT */
#define OBD_CONNECT_LRU_RESIZE  0x2000000ULL /*LRU resize feature. */
#define OBD_CONNECT_MDS_MDS     0x4000000ULL /*MDS-MDS connection */
#define OBD_CONNECT_REAL        0x8000000ULL /*real connection */
#define OBD_CONNECT_CHANGE_QS  0x10000000ULL /*Not used since 2.4 */
#define OBD_CONNECT_CKSUM      0x20000000ULL /*support several cksum algos*/
#define OBD_CONNECT_FID        0x40000000ULL /*FID is supported by server */
#define OBD_CONNECT_VBR        0x80000000ULL /*version based recovery */
#define OBD_CONNECT_LOV_V3    0x100000000ULL /*client supports LOV v3 EA */
#define OBD_CONNECT_GRANT_SHRINK 0x200000000ULL /* support grant shrink */
#define OBD_CONNECT_SKIP_ORPHAN  0x400000000ULL /* don't reuse orphan objids */
#define OBD_CONNECT_MAX_EASIZE   0x800000000ULL /* preserved for large EA */
#define OBD_CONNECT_FULL20      0x1000000000ULL /* it is 2.0 client */
#define OBD_CONNECT_LAYOUTLOCK  0x2000000000ULL /* client uses layout lock */
#define OBD_CONNECT_64BITHASH   0x4000000000ULL /* client supports 64-bit
                                                 * dir hash */
#define OBD_CONNECT_MAXBYTES    0x8000000000ULL /* max stripe size */
#define OBD_CONNECT_IMP_RECOV  0x10000000000ULL /* imp recovery support */
#define OBD_CONNECT_JOBSTATS   0x20000000000ULL /* jobid in ptlrpc_body */
#define OBD_CONNECT_UMASK      0x40000000000ULL /* create uses client umask */
#define OBD_CONNECT_EINPROGRESS 0x80000000000ULL /* client handles -EINPROGRESS
                                                  * RPC error properly */
#define OBD_CONNECT_GRANT_PARAM 0x100000000000ULL/* extra grant params used for
                                                  * finer space reservation */
#define OBD_CONNECT_FLOCK_OWNER 0x200000000000ULL /* for the fixed 1.8
                                                   * policy and 2.x server */
#define OBD_CONNECT_LVB_TYPE    0x400000000000ULL /* variable type of LVB */
#define OBD_CONNECT_NANOSEC_TIME 0x800000000000ULL /* nanosecond timestamps */
#define OBD_CONNECT_LIGHTWEIGHT 0x1000000000000ULL/* lightweight connection */
#define OBD_CONNECT_SHORTIO     0x2000000000000ULL/* short io */
#define OBD_CONNECT_PINGLESS    0x4000000000000ULL/* pings not required */
#define OBD_CONNECT_FLOCK_DEAD  0x8000000000000ULL/* deadlock detection */
#define OBD_CONNECT_DISP_STRIPE 0x10000000000000ULL/* create stripe disposition*/
#define OBD_CONNECT_OPEN_BY_FID 0x20000000000000ULL /* open by fid won't pack
                                                     * name in request */
Each flag corresponds to a particular capability that the client and
target together will honor. A client will send a message including
some subset of these capabilities during a connection request to a
specific target. This tells the server which capabilities the client
has. The server then replies with the subset of those capabilities it
agrees to honor (for the given target).
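The exchange amounts to a bitwise intersection of the two flag
sets. Below is a minimal sketch; the helper name 'negotiate_flags' is
hypothetical, and the flag values are taken from the list above:

```c
#include <assert.h>
#include <stdint.h>

#define OBD_CONNECT_GRANT   0x8ULL   /* OSC gets grant at connect */
#define OBD_CONNECT_VERSION 0x20ULL  /* Lustre versions in ocd */
#define OBD_CONNECT_ACL     0x80ULL  /* access control lists */

/* The server honors only the capabilities both sides support, so the
 * reply carries the intersection of the client's proposal and the
 * server's own flag set. */
static uint64_t negotiate_flags(uint64_t client_flags,
                                uint64_t server_flags)
{
        return client_flags & server_flags;
}
```

Any field guarded by a flag that did not survive the intersection is
simply ignored by both sides.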
If the OBD_CONNECT_VERSION flag is set then the 'ocd_version' field is
honored. The 'ocd_version' gives an encoding of the Lustre version,
with one byte each for the major, minor, patch, and fix values.
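A sketch of that packing, modeled on the OBD_OCD_VERSION macro in the
Lustre headers (treat the exact layout shown here as an assumption):

```c
#include <assert.h>
#include <stdint.h>

/* One byte per component: major in the top byte, then minor, patch,
 * and fix.  Version 2.7.32 therefore packs to 0x02072000. */
#define OBD_OCD_VERSION(major, minor, patch, fix)                 \
        (((uint32_t)(major) << 24) | ((uint32_t)(minor) << 16) |  \
         ((uint32_t)(patch) << 8)  |  (uint32_t)(fix))
```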
If the OBD_CONNECT_GRANT flag is set then the 'ocd_grant' field is
honored. The 'ocd_grant' value in a reply (to a connection request)
sets the client's grant.

If the OBD_CONNECT_INDEX flag is set then the 'ocd_index' field is
honored. The 'ocd_index' value is set in a reply to a connection
request. It holds the LOV index of the target.
If the OBD_CONNECT_BRW_SIZE flag is set then the 'ocd_brw_size' field
is honored. The 'ocd_brw_size' value sets the maximum supported RPC
size. The client proposes a value in its connection request, and the
server's reply will either agree or further limit the size.
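The net effect is that the connection's RPC size is never larger than
either side's limit; a sketch, with 'negotiate_brw_size' as a
hypothetical helper name:

```c
#include <assert.h>
#include <stdint.h>

/* The client proposes a maximum bulk RPC size; the server replies
 * with that value or with a smaller limit of its own. */
static uint32_t negotiate_brw_size(uint32_t client_proposal,
                                   uint32_t server_limit)
{
        return client_proposal < server_limit ? client_proposal
                                              : server_limit;
}
```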
If the OBD_CONNECT_IBITS flag is set then the 'ocd_ibits_known' field
is honored. The 'ocd_ibits_known' value determines the handling of
locks on inodes. See the discussion of inodes and extended attributes.

If the OBD_CONNECT_GRANT_PARAM flag is set then the 'ocd_blocksize',
'ocd_inodespace', and 'ocd_grant_extent' fields are honored. A server
reply uses the 'ocd_blocksize' value to inform the client of the log
base two of the size in bytes of the backend file system's blocks.

A server reply uses the 'ocd_inodespace' value to inform the client of
the log base two of the size of an inode.
Under some circumstances (for example when ZFS is the backend file
system) there may be additional overhead in handling writes for each
extent. The server uses the 'ocd_grant_extent' value to inform the
client of the size in bytes consumed from its grant on the server when
creating a new file. The client uses this value in calculating how
much dirty write cache it has and whether it has reached the limit
established by the target's grant.
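As an illustration of that accounting, a client might estimate the
grant a dirty extent will consume as its data rounded up to whole
backend blocks plus the per-extent overhead. The helper below is an
assumption for illustration, not the client's exact formula:

```c
#include <assert.h>
#include <stdint.h>

/* Round the dirty bytes up to whole backend blocks (1 << blockbits
 * bytes each) and add the per-extent overhead reported by the server
 * in 'ocd_grant_extent'. */
static uint64_t grant_needed(uint64_t dirty_bytes, uint8_t blockbits,
                             uint16_t grant_extent)
{
        uint64_t blocksize = 1ULL << blockbits;
        uint64_t blocks = (dirty_bytes + blocksize - 1) / blocksize;

        return blocks * blocksize + grant_extent;
}
```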
If the OBD_CONNECT_TRANSNO flag is set then the 'ocd_transno' field is
honored. A server uses the 'ocd_transno' value during recovery to
inform the client of the transaction number at which it should begin
replay.
If the OBD_CONNECT_MDS flag is set then the 'ocd_group' field is
honored. When an MDT connects to an OST the 'ocd_group' field informs
the OSS of the MDT's index. Objects on that OST for that MDT will be
in a common namespace served by that MDT.
If the OBD_CONNECT_CKSUM flag is set then the 'ocd_cksum_types' field
is honored. The client uses the 'ocd_cksum_types' field to propose to
the server the client's available (presumably hardware assisted)
checksum mechanisms. The server replies with the checksum types it has
available. Finally, the client will employ the fastest of the
agreed-upon algorithms.
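A sketch of that selection, assuming hypothetical flag values and a
client preference order in which hardware-assisted CRC32C beats the
software algorithms:

```c
#include <assert.h>
#include <stdint.h>

#define CKSUM_CRC32  0x1u  /* assumed flag values for illustration */
#define CKSUM_ADLER  0x2u
#define CKSUM_CRC32C 0x4u

/* Intersect the two masks, then pick the fastest type both sides
 * offer. */
static uint32_t pick_cksum_type(uint32_t client_types,
                                uint32_t server_types)
{
        uint32_t common = client_types & server_types;

        if (common & CKSUM_CRC32C)
                return CKSUM_CRC32C;
        if (common & CKSUM_ADLER)
                return CKSUM_ADLER;
        return common & CKSUM_CRC32;
}
```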
If the OBD_CONNECT_MAX_EASIZE flag is set then the 'ocd_max_easize'
field is honored. The server uses 'ocd_max_easize' to inform the
client about the amount of space that can be allocated in each inode
for extended attributes. The 'ocd_max_easize' specifically refers to
the space used for striping information. This allows the client to
determine the maximum layout size (and hence stripe count) that can be
stored for a file.
The 'ocd_instance' field (alone) is not governed by an OBD_CONNECT_*
flag. The MGS uses the 'ocd_instance' value in its reply to a
connection request to inform the server and target of the "era" of its
connection. The MGS initializes the era value for each server to zero
and increments that value every time the target connects. This
supports imperative recovery.
If the OBD_CONNECT_MAXBYTES flag is set then the 'ocd_maxbytes' field
is honored. An OSS uses the 'ocd_maxbytes' value to inform the client
of the maximum OST object size for this target. A stripe on any OST
for a multi-striped file cannot be larger than the minimum
'ocd_maxbytes' value among those OSTs.

The additional space in the 'obd_connect_data' structure is unused and
reserved for future use.

fixme: Discuss the meaning of the rest of the OBD_CONNECT_* flags.
#define IMP_STATE_HIST_LEN 16
struct import_state_hist {
        enum lustre_imp_state     ish_state;
        /* timestamp field not shown */
};

struct obd_import {
        struct portals_handle     imp_handle;
        atomic_t                  imp_refcount;
        struct lustre_handle      imp_dlm_handle;
        struct ptlrpc_connection *imp_connection;
        struct ptlrpc_client     *imp_client;
        cfs_list_t                imp_pinger_chain;
        cfs_list_t                imp_zombie_chain;
        cfs_list_t                imp_replay_list;
        cfs_list_t                imp_sending_list;
        cfs_list_t                imp_delayed_list;
        cfs_list_t                imp_committed_list;
        cfs_list_t               *imp_replay_cursor;
        struct obd_device        *imp_obd;
        struct ptlrpc_sec        *imp_sec;
        struct mutex              imp_sec_mutex;
        cfs_time_t                imp_sec_expire;
        wait_queue_head_t         imp_recovery_waitq;
        atomic_t                  imp_inflight;
        atomic_t                  imp_unregistering;
        atomic_t                  imp_replay_inflight;
        atomic_t                  imp_inval_count;
        atomic_t                  imp_timeouts;
        enum lustre_imp_state     imp_state;
        struct import_state_hist  imp_state_hist[IMP_STATE_HIST_LEN];
        int                       imp_state_hist_idx;
        int                       imp_generation;
        __u32                     imp_conn_cnt;
        int                       imp_last_generation_checked;
        __u64                     imp_last_replay_transno;
        __u64                     imp_peer_committed_transno;
        __u64                     imp_last_transno_checked;
        struct lustre_handle      imp_remote_handle;
        cfs_time_t                imp_next_ping;
        __u64                     imp_last_success_conn;
        cfs_list_t                imp_conn_list;
        struct obd_import_conn   *imp_conn_current;
        /* several fields, including 'imp_lock' and the earlier flag
         * bits, not shown */
        unsigned long             imp_server_timeout:1,
                                  imp_delayed_recovery:1,
                                  imp_no_lock_replay:1,
                                  imp_force_next_verify:1,
                                  imp_no_pinger_recover:1,
                                  imp_force_reconnect:1;
        __u32                     imp_connect_op;
        struct obd_connect_data   imp_connect_data;
        __u64                     imp_connect_flags_orig;
        int                       imp_connect_error;
        __u32                     imp_msghdr_flags;  /* adjusted based on server capability */
        struct ptlrpc_request_pool *imp_rq_pool;     /* emergency request pool */
        struct imp_at             imp_at;            /* adaptive timeout data */
        time_t                    imp_last_reply_time;  /* for health check */
};
The 'imp_handle' value is the unique id for the import, and is used as
a hash key to gain access to it. It is not used in any of the Lustre
protocol messages, but rather is just for internal reference.

The 'imp_refcount' is also for internal use. The value is incremented
with each RPC created, and decremented as the request is freed. When
the reference count is zero the import can be freed, as when the
target is being disconnected.
The 'imp_dlm_handle' is a reference to the LDLM export for this
connection.

There can be multiple paths through the network to a given target, in
which case there would be multiple 'obd_import_conn' items on the
'imp_conn_list'. Each 'obd_import_conn' includes a
'ptlrpc_connection', so 'imp_connection' points to the one that is
currently in use.
The 'imp_client' identifies the (local) portals for sending and
receiving messages as well as the client's name. The information is
specific to either an MDC or an OSC.

The 'imp_pinger_chain' places the import on a linked list of imports
that need periodic pings.

The 'imp_zombie_chain' places the import on a list ready for being
freed. Unused imports ('imp_refcount' is zero) are deleted
asynchronously by a garbage collecting process.
In order to support recovery the client must keep requests that are in
the process of being handled by the target. The target replies to a
request as soon as the target has made its local update to
memory. When the client receives that reply the request is put on the
'imp_replay_list'. In the event of a failure (target crash, lost
message) this list is then replayed for the target during the recovery
process. When a request has been sent but has not yet received a reply
it is placed on the 'imp_sending_list'. In the event of a failure
those will simply be replayed after any recovery has been
completed. Finally, there may be requests that the client is delaying
before it sends them. This can happen if the client is in a degraded
mode, as when it is in recovery after a failure. These requests are
put on the 'imp_delayed_list' and not processed until recovery is
complete and the 'imp_sending_list' has been replayed.

In order to support recovery 'open' requests must be preserved even
after they have completed. Those requests are placed on the
'imp_committed_list' and the 'imp_replay_cursor' allows for
accelerated access to those items.
The 'imp_obd' is a reference to the details about the target device
that is the subject of this import. There is a lot of state info in
there along with many implementation details that are not relevant to
the actual Lustre protocol. fixme: I'll want to go through all of the
fields in that structure to see which, if any, need more discussion.

The security policy and settings are kept in 'imp_sec', and
'imp_sec_mutex' helps manage access to that info. The 'imp_sec_expire'
setting is in support of security policies that have an expiration
time.
Some processes may need the import to be in a fully connected state in
order to proceed. The 'imp_recovery_waitq' is where those threads will
wait during recovery.

The 'imp_inflight' field counts the number of in-flight requests. It
is incremented with each request sent and decremented with each reply
received.
The client reserves buffers for the processing of requests and
replies, and then informs LNet about those buffers. Buffers may get
reused during subsequent processing, but then a point may come when
the buffer is no longer going to be used. The client increments the
'imp_unregistering' counter and informs LNet the buffer is no longer
needed. When LNet has freed the buffer it will notify the client and
then the 'imp_unregistering' can be decremented again.
During recovery the 'imp_replay_inflight' field counts the number of
requests from the replay list that have been sent and have not been
replied to.
The 'imp_inval_count' field counts how many threads are in the process
of cleaning up this connection or waiting for cleanup to complete. The
cleanup itself may be needed in the case there is an eviction or other
problem (fixme: what other problem?). The cleanup may involve freeing
allocated resources, updating internal state, running replay lists,
and invalidating cache. Since it could take a while, there may end up
being multiple threads waiting on this process to complete.

The 'imp_timeouts' field is a counter that is incremented every time
there is a timeout in communication with the target.
The 'imp_state' tracks the state of the import. It draws from the
enumerated set of values:

.enum_lustre_imp_state
[options="header"]
|====
| state | value
| LUSTRE_IMP_CLOSED | 1
| LUSTRE_IMP_NEW | 2
| LUSTRE_IMP_DISCON | 3
| LUSTRE_IMP_CONNECTING | 4
| LUSTRE_IMP_REPLAY | 5
| LUSTRE_IMP_REPLAY_LOCKS | 6
| LUSTRE_IMP_REPLAY_WAIT | 7
| LUSTRE_IMP_RECOVER | 8
| LUSTRE_IMP_FULL | 9
| LUSTRE_IMP_EVICTED | 10
|====
fixme: what are the transitions between these states? The
'imp_state_hist' array maintains a list of the last 16
(IMP_STATE_HIST_LEN) states the import was in, along with the time it
entered each (fixme: or is it when it left that state?). The list is
maintained in a circular manner, so the 'imp_state_hist_idx' points to
the entry in the list for the most recently visited state.
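A minimal sketch of that circular history, assuming the timestamp
records entry into the state (one of the open questions above):

```c
#include <assert.h>
#include <time.h>

#define IMP_STATE_HIST_LEN 16

enum lustre_imp_state { LUSTRE_IMP_CLOSED = 1, LUSTRE_IMP_FULL = 9 };

struct import_state_hist {
        enum lustre_imp_state ish_state;
        time_t                ish_time;
};

static struct import_state_hist hist[IMP_STATE_HIST_LEN];
static int hist_idx;  /* slot of the most recently recorded state */

/* Advance the index, wrapping at 16, and overwrite the oldest slot;
 * hist[hist_idx] is always the most recent transition. */
static void record_state(enum lustre_imp_state state, time_t now)
{
        hist_idx = (hist_idx + 1) % IMP_STATE_HIST_LEN;
        hist[hist_idx].ish_state = state;
        hist[hist_idx].ish_time = now;
}
```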
The 'imp_generation' and 'imp_conn_cnt' fields are monotonically
increasing counters. Every time a connection request is sent to the
target the 'imp_conn_cnt' counter is incremented, and every time a
reply is received for the connection request the 'imp_generation'
counter is incremented.

The 'imp_last_generation_checked' implements an optimization. When a
replay process has successfully traversed the replay list the
'imp_generation' value is noted here. If the generation has not
incremented then the replay list does not need to be traversed again.
During replay the 'imp_last_replay_transno' is set to the transaction
number of the last request being replayed, and
'imp_peer_committed_transno' is set to the 'pb_last_committed' value
(of the 'ptlrpc_body') from replies if that value is higher than the
previous 'imp_peer_committed_transno'. The 'imp_last_transno_checked'
field implements an optimization. It is set to the
'imp_last_replay_transno' as its replay is initiated. If
'imp_last_transno_checked' is still 'imp_last_replay_transno' and
'imp_generation' is still 'imp_last_generation_checked' then there
are no additional requests ready to be removed from the replay
list. Furthermore, 'imp_last_transno_checked' may no longer be needed,
since the committed transactions are now maintained on a separate list.
The 'imp_remote_handle' is the handle sent by the target in a
connection reply message to uniquely identify the export for this
target and client that is maintained on the server. This is the handle
used in all subsequent messages to the target.
There are two separate ping intervals (fixme: what are the
values?). If there are no uncommitted messages for the target then the
default ping interval is used to set the 'imp_next_ping' to the time
the next ping needs to be sent. If there are uncommitted requests then
a "short interval" is used to set the time for the next ping.
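A sketch of how 'imp_next_ping' might be set; the interval constants
are placeholders, since the text does not give the real values:

```c
#include <assert.h>
#include <time.h>

#define PING_INTERVAL       25  /* seconds; assumed default interval */
#define PING_INTERVAL_SHORT  7  /* seconds; assumed short interval */

/* With uncommitted requests outstanding, ping sooner so the server
 * can advance its last-committed notification. */
static time_t next_ping(time_t now, int have_uncommitted)
{
        return now + (have_uncommitted ? PING_INTERVAL_SHORT
                                       : PING_INTERVAL);
}
```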
The 'imp_last_success_conn' value is set to the time of the last
successful connection. fixme: The source says it is in 64-bit
jiffies, but does not further indicate how that value is calculated.

Since there can actually be multiple connection paths for a target
(due to failover or multihomed configurations) the import maintains a
list of all the possible connection paths in the list pointed to by
the 'imp_conn_list' field. The 'imp_conn_current' points to the one
currently in use. Compare with the 'imp_connection' field. They point
to different structures, but each is reachable from the other.

Most of the flag, state, and list information in the import needs to
be accessed atomically. The 'imp_lock' is used to maintain the
consistency of the import while it is manipulated by multiple threads.
The various flags are documented in the source code and are largely
obvious from those short comments, reproduced here:

[options="header"]
|====
| flag | explanation
| imp_no_timeout | timeouts are disabled
| imp_invalid | client has been evicted
| imp_deactive | client administratively disabled
| imp_replayable | try to recover the import
| imp_dlm_fake | don't run recovery (timeout instead)
| imp_server_timeout | use 1/2 timeout on MDSs and OSCs
| imp_delayed_recovery | VBR: imp in delayed recovery
| imp_no_lock_replay | VBR: if gap was found then no lock replays
| imp_vbr_failed | recovery by versions was failed
| imp_force_verify | force an immediate ping
| imp_force_next_verify | force a scheduled ping
| imp_pingable | target is pingable
| imp_resend_replay | resend for replay
| imp_no_pinger_recover | disable normal recovery, for test only
| imp_need_mne_swab | need IR MNE swab
| imp_force_reconnect | import must be reconnected, not new connection
| imp_connect_tried | import has tried to connect with server
|====
A few additional notes are in order. The 'imp_dlm_fake' flag signifies
that this is not a "real" import, but rather it is a "reverse" import
in support of the LDLM. When the LDLM invokes callback operations the
messages are initiated at the other end, so there needs to be a fake
import to receive the replies from the operation. Prior to the
introduction of adaptive timeouts the servers were given fixed timeout
values that were half those used for the clients. The
'imp_server_timeout' flag indicated that the import should use the
half-sized timeouts, but with the introduction of adaptive timeouts
this facility is no longer used. "VBR" is "version based recovery",
and it introduces a new possibility for handling requests. Previously,
if there were a gap in the transaction number sequence the requests
associated with the missing transaction numbers would be
discarded. With VBR those transactions only need to be discarded if
there is an actual dependency between the ones that were skipped and
the currently latest committed transaction number. fixme: What are the
circumstances that would lead to setting the 'imp_force_next_verify'
or 'imp_pingable' flags? During recovery, the client sets the
'imp_no_pinger_recover' flag, which tells the process to proceed from
the current value of 'imp_last_replay_transno'. The
'imp_need_mne_swab' flag indicates a version-dependent circumstance
where swabbing was inadvertently left out of one processing step.
An 'obd_export' structure for a given target is created on a server
for each client that connects to that target. The exports for all the
clients for a given target are managed together. The export represents
the connection state between the client and target as well as the
current state of any ongoing activity. Thus each pending request will
have a reference to the export. The export is discarded if the
connection goes away, but only after all the references to it have
been cleaned up. The state information for each export is also
maintained on disk. In the event of a server failure, that or another
server can read the export data from disk to enable recovery.
struct obd_export {
        struct portals_handle     exp_handle;
        atomic_t                  exp_refcount;
        atomic_t                  exp_rpc_count;
        atomic_t                  exp_cb_count;
        atomic_t                  exp_replay_count;
        atomic_t                  exp_locks_count;
#if LUSTRE_TRACKS_LOCK_EXP_REFS
        cfs_list_t                exp_locks_list;
        spinlock_t                exp_locks_list_guard;
#endif
        struct obd_uuid           exp_client_uuid;
        cfs_list_t                exp_obd_chain;
        cfs_hlist_node_t          exp_uuid_hash;
        cfs_hlist_node_t          exp_nid_hash;
        cfs_list_t                exp_obd_chain_timed;
        struct obd_device        *exp_obd;
        struct obd_import        *exp_imp_reverse;
        struct nid_stat          *exp_nid_stats;
        struct ptlrpc_connection *exp_connection;
        __u32                     exp_conn_cnt;
        cfs_hash_t               *exp_lock_hash;
        cfs_hash_t               *exp_flock_hash;
        cfs_list_t                exp_outstanding_replies;
        cfs_list_t                exp_uncommitted_replies;
        spinlock_t                exp_uncommitted_replies_lock;
        __u64                     exp_last_committed;
        cfs_time_t                exp_last_request_time;
        cfs_list_t                exp_req_replay_queue;
        spinlock_t                exp_lock;
        struct obd_connect_data   exp_connect_data;
        enum obd_option           exp_flags;
        /* several flag bits not shown */
        unsigned long             exp_req_replay_needed:1,
                                  exp_lock_replay_needed:1;
        enum lustre_sec_part      exp_sp_peer;
        struct sptlrpc_flavor     exp_flvr;
        struct sptlrpc_flavor     exp_flvr_old[2];
        cfs_time_t                exp_flvr_expire[2];
        spinlock_t                exp_rpc_lock;
        cfs_list_t                exp_hp_rpcs;
        cfs_list_t                exp_reg_rpcs;
        cfs_list_t                exp_bl_list;
        spinlock_t                exp_bl_list_lock;
        union {
                struct tg_export_data     eu_target_data;
                struct mdt_export_data    eu_mdt_data;
                struct filter_export_data eu_filter_data;
                struct ec_export_data     eu_ec_data;
                struct mgs_export_data    eu_mgs_data;
        } u;
        struct nodemap           *exp_nodemap;
};
The 'exp_handle' is a little extra information as compared with a
'struct lustre_handle', which is just the cookie. The cookie that the
server generates to uniquely identify this connection gets put into
this structure along with other information about the device in
question. This is the cookie the *_CONNECT reply sends back to the
client, where it is then stored in the client's import.
The 'exp_refcount' gets incremented whenever some aspect of the export
is "in use". The arrival of an otherwise unprocessed message for this
target will increment the refcount. A reference by an LDLM lock that
gets taken will increment the refcount. Callback invocations and
replay also lead to incrementing the refcount. The next four fields -
'exp_rpc_count', 'exp_cb_count', 'exp_replay_count', and
'exp_locks_count' - all subcategorize the 'exp_refcount' for debug
purposes. Similarly, the 'exp_locks_list' and 'exp_locks_list_guard'
are further debug info that lists the actual locks accounted in
'exp_locks_count'.
The 'exp_client_uuid' gives the UUID of the client connected to this
export. Fixme: when and how does the UUID get generated?

The server maintains all the exports for a given target on a circular
list. Each export's place on that list is maintained in the
'exp_obd_chain'. A common activity is to look up the export based on
the UUID or the nid of the client, and the 'exp_uuid_hash' and
'exp_nid_hash' fields maintain this export's place in hashes
constructed for that purpose.
Exports are also maintained on a list sorted by the last time the
corresponding client was heard from. The 'exp_obd_chain_timed' field
maintains the export's place on that list. When a message arrives from
the client the time is "now" so the export gets put at the end of the
list. Since it is circular, the next export is then the oldest. If it
has not been heard from within its timeout interval that export is
marked for later eviction.
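Because the list stays sorted by last-contact time, only the oldest
entry needs to be examined; a sketch of the staleness test, with
hypothetical names:

```c
#include <assert.h>
#include <time.h>

/* An export whose 'exp_last_request_time' is older than the timeout
 * interval gets marked for later eviction. */
static int export_is_stale(time_t now, time_t last_request_time,
                           time_t timeout)
{
        return now - last_request_time > timeout;
}
```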
The 'exp_obd' points to the 'obd_device' structure for the device that
is the target of this export.

In the event of a call-back the export needs to have the ability to
initiate messages back to the client. The 'exp_imp_reverse' provides a
"reverse" import that manages this capability.

The '/proc' stats for the export (and the target) get updated via the
'exp_nid_stats'.
The 'exp_connection' points to the connection information for this
export. This is the information about the actual networking pathway(s)
that get used for communication.
The 'exp_conn_cnt' notes the connection count value from the client at
the time of the connection. In the event that more than one connection
request is issued before the connection is established, the
'exp_conn_cnt' will list the highest value. If a previous connection
attempt (with a lower value) arrives later it may be safely
discarded. Every request lists its connection count, so non-connection
requests with lower connection count values can also be discarded.
Note that this does not count how many times the client has connected
to the target. If a client is evicted the export is deleted once it
has been cleaned up and its 'exp_refcount' reduced to zero. A new
connection from the client will get a new export.
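The discard rule above can be sketched as a simple comparison against
the recorded 'exp_conn_cnt' (the helper name is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* A request carrying a connection count lower than the export's
 * recorded value belongs to a superseded connection attempt and can
 * be safely dropped. */
static int request_is_stale(uint32_t req_conn_cnt,
                            uint32_t exp_conn_cnt)
{
        return req_conn_cnt < exp_conn_cnt;
}
```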
The 'exp_lock_hash' provides access to the locks granted to the
corresponding client for this target. If a lock cannot be granted it
is discarded. A file system lock ("flock") is also implemented through
the LDLM lock system, but not all LDLM locks are flocks. The ones that
are flocks are gathered in a hash 'exp_flock_hash'. This supports
deadlock detection.
For those requests that initiate file system modifying transactions
the request and its attendant locks need to be preserved until either
a) the client acknowledges receiving the reply, or b) the transaction
has been committed locally. This ensures a request can be replayed in
the event of a failure. The reply is kept on the
'exp_outstanding_replies' list until the LNet layer notifies the
server that the reply has been acknowledged. A reply is kept on the
'exp_uncommitted_replies' list until the transaction (if any) has been
committed.

The 'exp_last_committed' value keeps the transaction number of the
last committed transaction. Every reply to a client includes this
value as a means of early-as-possible notification of transactions
that have committed.
The 'exp_last_request_time' is self-explanatory.

During replay a request that is waiting to be replayed is maintained
on the list 'exp_req_replay_queue'.

The 'exp_lock' spin-lock is used for access control to the export's
flags, as well as the 'exp_outstanding_replies' list and the reverse
import.
The 'exp_connect_data' refers to an 'obd_connect_data' structure for
the connection established between this target and the client this
export refers to. See also the corresponding entry in the import and
in the connect messages passed between the hosts.

The 'exp_flags' field encodes three directives as follows:

----
OBD_OPT_FORCE = 0x0001,
OBD_OPT_FAILOVER = 0x0002,
OBD_OPT_ABORT_RECOV = 0x0004,
----

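The values above suggest a bitmask. A minimal sketch of testing one of
the directives follows; the enum tag and helper name are illustrative
assumptions, not necessarily the names used in the Lustre sources.

```c
#include <stdbool.h>

/* The three directives as a bitmask, matching the values above. */
enum obd_option {
        OBD_OPT_FORCE       = 0x0001,
        OBD_OPT_FAILOVER    = 0x0002,
        OBD_OPT_ABORT_RECOV = 0x0004,
};

/* Illustrative helper: test whether a directive is set in
 * 'exp_flags'. */
static bool export_flag_set(unsigned int exp_flags, enum obd_option opt)
{
        return (exp_flags & opt) != 0;
}
```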
fixme: Are these set for some exports as a condition of their
existence, or do they reflect a transient state the export is passing
through?

The 'exp_failed' flag gets set whenever the target has failed for any
reason or the export is otherwise due to be cleaned up. Once set it
will not be unset in this export. Any subsequent connection between
the client and the target would be governed by a new export.

After a failure, export data is retrieved from disk and the exports
are recreated. Exports created in this way will have their
'exp_in_recovery' flag set. Once any outstanding requests and locks
have been recovered for the client, the export is recovered and
'exp_in_recovery' can be cleared. When all the client exports for a
given target have been recovered, the target is considered recovered,
and when all targets have been recovered the server is considered
recovered.

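The recovery hierarchy just described (export, then target, then
server) can be sketched with illustrative types; none of these names
come from the Lustre sources.

```c
#include <stdbool.h>

/* Illustrative type: each client export tracks whether it is still
 * in recovery. */
struct export_s { bool exp_in_recovery; };

/* A target is recovered once none of its client exports is still in
 * recovery; applying the same test across all targets yields server
 * recovery. */
static bool target_recovered(const struct export_s *exports, int n)
{
        for (int i = 0; i < n; i++)
                if (exports[i].exp_in_recovery)
                        return false;
        return true;
}
```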
A *_DISCONNECT message from the client will set the 'exp_disconnected'
flag, as will any sort of failure of the target. Once set, the export
will be cleaned up and deleted.

When a *_CONNECT message arrives the 'exp_connecting' flag is set. If
for some reason a second *_CONNECT request arrives from the client it
can be discarded when this flag is set.

The 'exp_delayed' flag is no longer used. In older code it indicated
that recovery had not completed in a timely fashion, but that a tardy
recovery would still be possible, since there were no dependencies on

The 'exp_vbr_failed' flag indicates a failure during the recovery
process. See <<recovery>> for a more detailed discussion of recovery
and transaction replay. For a file system modifying request, the
server composes its reply including the 'pb_pre_versions' entries in
'ptlrpc_body', which indicate the most recent updates to the
object. The client updates the request with the 'pb_transno' and
'pb_pre_versions' from the reply, and keeps that request until the
target signals that the transaction has been committed to disk. If the
client times out without that confirmation then it will 'replay' the
request, which now includes the 'pb_pre_versions' information. During
a replay the target checks that the object has not been further
modified beyond those 'pb_pre_versions'. If this check fails then the
request is out of date, and the recovery process fails for the
connection between this client and this target. At that point the
'exp_vbr_failed' flag is set to indicate version based recovery
failed. This will lead to the client being evicted and this export
being cleaned up and deleted.

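A sketch of the version check performed during replay follows. In
reality the recorded versions come from the 'pb_pre_versions' entries
in 'ptlrpc_body'; the comparison here is simplified to a single
version, and the names are illustrative.

```c
#include <stdbool.h>

/* A replayed request is accepted only if the object's current
 * version still matches the pre-version recorded when the request
 * first ran; a mismatch means the object was modified since, so
 * version based recovery fails for this connection. */
static bool replay_version_ok(unsigned long long current_version,
                              unsigned long long recorded_pre_version)
{
        return current_version == recorded_pre_version;
}
```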
At the start of recovery both the 'exp_req_replay_needed' and
'exp_lock_replay_needed' flags are set. As request replay is completed
the 'exp_req_replay_needed' flag is cleared. As lock replay is
completed the 'exp_lock_replay_needed' flag is cleared. Once both are
cleared the 'exp_in_recovery' flag can be cleared.

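Equivalently, 'exp_in_recovery' may only be cleared once both replay
flags have been cleared, as in this trivial sketch (the helper name is
hypothetical):

```c
#include <stdbool.h>

/* Recovery of an export is complete once both request replay and
 * lock replay have finished. */
static bool export_recovery_done(bool exp_req_replay_needed,
                                 bool exp_lock_replay_needed)
{
        return !exp_req_replay_needed && !exp_lock_replay_needed;
}
```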
The 'exp_need_sync' flag supports an optimization. At mount time it is
likely that every client (potentially thousands) will create an export
and that export will need to be saved to disk synchronously. This can
lead to an unusually high and poorly performing interaction with the
disk. When the export is created the 'exp_need_sync' flag is set and
the actual writing to disk is delayed. As transactions arrive from
clients (in a much less coordinated fashion) the 'exp_need_sync' flag
indicates a need to save the export as well as the transaction. At
that point the flag is cleared (but see below).

In DNE (phase I) the export for an MDT managing the connection from
another MDT will want to always keep the 'exp_need_sync' flag set. For
that special case such an export sets the 'exp_keep_sync' flag, which
then prevents the 'exp_need_sync' flag from ever being cleared. This
will no longer be needed in DNE Phase II.

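The interaction of 'exp_need_sync' and 'exp_keep_sync' might look like
the following sketch; the field names mirror the text, but the struct
and helper are hypothetical.

```c
#include <stdbool.h>

struct export_sync {
        bool exp_need_sync;  /* export must be saved with next txn */
        bool exp_keep_sync;  /* DNE phase I: never clear need_sync */
};

/* Called when a transaction from the client is committed: returns
 * true if the export should be written out along with it, clearing
 * the flag unless 'exp_keep_sync' pins it. */
static bool export_take_need_sync(struct export_sync *e)
{
        if (!e->exp_need_sync)
                return false;
        if (!e->exp_keep_sync)
                e->exp_need_sync = false;
        return true;
}
```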
The 'exp_flvr_changed' and 'exp_flvr_adapt' flags along with the
'exp_sp_peer', 'exp_flvr', 'exp_flvr_old', and 'exp_flvr_expire'
fields are all used to manage the security settings for the
connection. Security is discussed in the <<security>> section. (fixme:

The 'exp_libclient' flag indicates that the export is for a client
based on "liblustre". This allows for simplified handling on the
server. (fixme: how is processing simplified? It sounds like I may
need a whole special section on liblustre.)

The 'exp_need_mne_swab' flag indicates the presence of an old bug that
affected one special case of failed swabbing. It is not part of

As RPCs arrive they are first subjected to triage. Each request is
placed on the 'exp_hp_rpcs' list and examined to see if it is high
priority (fixme: what constitutes high priority? PING, truncate, bulk
I/O, ... others?). If it is not high priority then it is moved to the
'exp_reg_rpcs' list. The 'exp_rpc_lock' protects both lists from
concurrent access.

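The triage step reduces to a two-way sort, as in this illustrative
sketch (names are not from the Lustre sources):

```c
/* Every arriving RPC starts on the high-priority list and is moved
 * to the regular list if it does not qualify. */
enum rpc_list { RPC_LIST_HP, RPC_LIST_REG };

static enum rpc_list triage_rpc(int is_high_priority)
{
        return is_high_priority ? RPC_LIST_HP : RPC_LIST_REG;
}
```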
All arriving LDLM requests get put on the 'exp_bl_list' and access to
that list is controlled via the 'exp_bl_list_lock'.

The union provides for target specific data. The 'eu_target_data' is
a common core of fields for a generic target. The others are specific
to particular target types: 'eu_mdt_data' for MDTs, 'eu_filter_data'
for OSTs, 'eu_ec_data' for an "echo client" (fixme: describe what an
echo client is somewhere), and 'eu_mgs_data' for the MGS.

The 'exp_bl_lock_at' field supports adaptive timeouts, which will be
discussed separately. (fixme: so discuss it somewhere.)

Each export maintains a connection count. Or is it just the management