Connections Between Lustre Entities
-----------------------------------
The Lustre protocol is connection-based in that each pair of
communicating entities maintains shared, coordinated state
information. The most common example of two such entities is a client
and a target on some server. The target is identified by name to the
client through an interaction with the management server. The client
then 'connects' to the given target on the indicated server by sending
the appropriate version of the *_CONNECT message (MGS_CONNECT,
MDS_CONNECT, or OST_CONNECT - collectively *_CONNECT) and receiving
back the corresponding *_CONNECT reply. The server creates an 'export'
for the connection between the target and the client, and the export
holds the server state information for that connection. When the
client gets the reply it creates an 'import', and the import holds the
client state information for that connection. Note that if a server
has N targets and M clients have connected to them, the server will
have N x M exports and each client will have N imports.
There are also connections between the servers: Each MDS and OSS has a
connection to the MGS, where the MDS (respectively the OSS) plays the
role of the client in the above discussion. That is, the MDS initiates
the connection and has an import for the MGS, while the MGS has an
export for each MDS. Each MDS connects to each OST, with an import on
the MDS and an export on the OSS. This connection supports requests
from the MDS to the OST to create and destroy data objects, to set
attributes (such as permission bits), and for 'statfs' information for
precreation needs. Each OSS also connects to the first MDS to get
access to auxiliary services, with an import on the OSS and an export
on the first MDS. The auxiliary services are: the File ID Location
Database (FLDB), the quota master service, and the sequence
controller. This connection for auxiliary services is a 'lightweight'
one in that it has no replay functionality and consumes no space on
the MDS for client data. Each MDS also connects to all other MDSs to
support distributed namespace operations.
Finally, for some communications the roles of message initiation and
message reply are reversed. This is the case, for instance, with
call-back operations. In that case the entity which would normally
have an import has, instead, a 'reverse-export' and the
other end of the connection maintains a 'reverse-import'. The
reverse-import uses the same structure as a regular import, and the
reverse-export uses the same structure as a regular export.
An 'obd_connect_data' structure accompanies every connect operation in
both the request message and the reply message.
struct obd_connect_data {
        __u64 ocd_connect_flags;
        __u32 ocd_version;       /* OBD_CONNECT_VERSION */
        __u32 ocd_grant;         /* OBD_CONNECT_GRANT */
        __u32 ocd_index;         /* OBD_CONNECT_INDEX */
        __u32 ocd_brw_size;      /* OBD_CONNECT_BRW_SIZE */
        __u64 ocd_ibits_known;   /* OBD_CONNECT_IBITS */
        __u8  ocd_blocksize;     /* OBD_CONNECT_GRANT_PARAM */
        __u8  ocd_inodespace;    /* OBD_CONNECT_GRANT_PARAM */
        __u16 ocd_grant_extent;  /* OBD_CONNECT_GRANT_PARAM */
        __u64 ocd_transno;       /* OBD_CONNECT_TRANSNO */
        __u32 ocd_group;         /* OBD_CONNECT_MDS */
        __u32 ocd_cksum_types;   /* OBD_CONNECT_CKSUM */
        __u32 ocd_max_easize;    /* OBD_CONNECT_MAX_EASIZE */
        __u64 ocd_maxbytes;      /* OBD_CONNECT_MAXBYTES */
The 'ocd_connect_flags' field encodes the connect flags giving the
capabilities of a connection between client and target. Several of
those flags (noted in comments above and the discussion below)
actually control whether the remaining fields of 'obd_connect_data'
get used. The [[connect-flags]] flags are:
#define OBD_CONNECT_RDONLY                0x1ULL /*client has read-only access*/
#define OBD_CONNECT_INDEX                 0x2ULL /*connect specific LOV idx */
#define OBD_CONNECT_MDS                   0x4ULL /*connect from MDT to OST */
#define OBD_CONNECT_GRANT                 0x8ULL /*OSC gets grant at connect */
#define OBD_CONNECT_SRVLOCK              0x10ULL /*server takes locks for cli */
#define OBD_CONNECT_VERSION              0x20ULL /*Lustre versions in ocd */
#define OBD_CONNECT_REQPORTAL            0x40ULL /*Separate non-IO req portal */
#define OBD_CONNECT_ACL                  0x80ULL /*access control lists */
#define OBD_CONNECT_XATTR               0x100ULL /*client use extended attr */
#define OBD_CONNECT_CROW                0x200ULL /*MDS+OST create obj on write*/
#define OBD_CONNECT_TRUNCLOCK           0x400ULL /*locks on server for punch */
#define OBD_CONNECT_TRANSNO             0x800ULL /*replay sends init transno */
#define OBD_CONNECT_IBITS              0x1000ULL /*support for inodebits locks*/
#define OBD_CONNECT_JOIN               0x2000ULL /*files can be concatenated.
                                                  *We do not support JOIN FILE
                                                  *anymore, reserve this flags
                                                  *just for preventing such bit
                                                  *to be reused. */
#define OBD_CONNECT_ATTRFID            0x4000ULL /*Server can GetAttr By Fid*/
#define OBD_CONNECT_NODEVOH            0x8000ULL /*No open hndl on specl nodes*/
#define OBD_CONNECT_RMT_CLIENT        0x10000ULL /*Remote client */
#define OBD_CONNECT_RMT_CLIENT_FORCE  0x20000ULL /*Remote client by force */
#define OBD_CONNECT_BRW_SIZE          0x40000ULL /*Max bytes per rpc */
#define OBD_CONNECT_QUOTA64           0x80000ULL /*Not used since 2.4 */
#define OBD_CONNECT_MDS_CAPA         0x100000ULL /*MDS capability */
#define OBD_CONNECT_OSS_CAPA         0x200000ULL /*OSS capability */
#define OBD_CONNECT_CANCELSET        0x400000ULL /*Early batched cancels. */
#define OBD_CONNECT_SOM              0x800000ULL /*Size on MDS */
#define OBD_CONNECT_AT              0x1000000ULL /*client uses AT */
#define OBD_CONNECT_LRU_RESIZE      0x2000000ULL /*LRU resize feature. */
#define OBD_CONNECT_MDS_MDS         0x4000000ULL /*MDS-MDS connection */
#define OBD_CONNECT_REAL            0x8000000ULL /*real connection */
#define OBD_CONNECT_CHANGE_QS      0x10000000ULL /*Not used since 2.4 */
#define OBD_CONNECT_CKSUM          0x20000000ULL /*support several cksum algos*/
#define OBD_CONNECT_FID            0x40000000ULL /*FID is supported by server */
#define OBD_CONNECT_VBR            0x80000000ULL /*version based recovery */
#define OBD_CONNECT_LOV_V3        0x100000000ULL /*client supports LOV v3 EA */
#define OBD_CONNECT_GRANT_SHRINK  0x200000000ULL /* support grant shrink */
#define OBD_CONNECT_SKIP_ORPHAN   0x400000000ULL /* don't reuse orphan objids */
#define OBD_CONNECT_MAX_EASIZE    0x800000000ULL /* preserved for large EA */
#define OBD_CONNECT_FULL20       0x1000000000ULL /* it is 2.0 client */
#define OBD_CONNECT_LAYOUTLOCK   0x2000000000ULL /* client uses layout lock */
#define OBD_CONNECT_64BITHASH    0x4000000000ULL /* client supports 64-bits
                                                  * dir hash */
#define OBD_CONNECT_MAXBYTES     0x8000000000ULL /* max stripe size */
#define OBD_CONNECT_IMP_RECOV   0x10000000000ULL /* imp recovery support */
#define OBD_CONNECT_JOBSTATS    0x20000000000ULL /* jobid in ptlrpc_body */
#define OBD_CONNECT_UMASK       0x40000000000ULL /* create uses client umask */
#define OBD_CONNECT_EINPROGRESS 0x80000000000ULL /* client handles -EINPROGRESS
                                                  * RPC error properly */
#define OBD_CONNECT_GRANT_PARAM 0x100000000000ULL/* extra grant params used for
                                                  * finer space reservation */
#define OBD_CONNECT_FLOCK_OWNER 0x200000000000ULL /* for the fixed 1.8
                                                   * policy and 2.x server */
#define OBD_CONNECT_LVB_TYPE    0x400000000000ULL /* variable type of LVB */
#define OBD_CONNECT_NANOSEC_TIME 0x800000000000ULL /* nanosecond timestamps */
#define OBD_CONNECT_LIGHTWEIGHT 0x1000000000000ULL/* lightweight connection */
#define OBD_CONNECT_SHORTIO     0x2000000000000ULL/* short io */
#define OBD_CONNECT_PINGLESS    0x4000000000000ULL/* pings not required */
#define OBD_CONNECT_FLOCK_DEAD  0x8000000000000ULL/* deadlock detection */
#define OBD_CONNECT_DISP_STRIPE 0x10000000000000ULL/* create stripe disposition*/
#define OBD_CONNECT_OPEN_BY_FID 0x20000000000000ULL /* open by fid won't pack
                                                     * name in request */
Each flag corresponds to a particular capability that the client and
target together will honor. A client will send a message including
some subset of these capabilities during a connection request to a
specific target, telling the server what capabilities the client
has. The server then replies with the subset of those capabilities it
agrees to honor (for the given target).
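The negotiation just described amounts to a bitwise intersection of
the two flag sets. The sketch below is a simplified model of that
step, not the actual server code; the flag values are taken from the
list above.

```c
#include <assert.h>
#include <stdint.h>

/* A few of the connect flags from the list above. */
#define OBD_CONNECT_GRANT   0x8ULL
#define OBD_CONNECT_VERSION 0x20ULL
#define OBD_CONNECT_ACL     0x80ULL

/* The reply carries only the intersection of what the client
 * proposed and what the server supports for this target. */
static uint64_t negotiate_flags(uint64_t client_proposed,
                                uint64_t server_supported)
{
        return client_proposed & server_supported;
}
```

For example, a client proposing GRANT and ACL to a server that
supports GRANT and VERSION ends up with a connection honoring only
GRANT.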
If the OBD_CONNECT_VERSION flag is set then the 'ocd_version' field is
honored. The 'ocd_version' gives an encoding of the Lustre
version. For example, version 2.7.32 would be the hexadecimal number
0x02072000.
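The encoding packs one byte each for the major, minor, patch, and
"fix" levels, as in the OBD_OCD_VERSION macro from the Lustre
headers:

```c
#include <assert.h>

/* Version encoding for 'ocd_version': one byte each for major,
 * minor, patch, and fix levels, most significant first. */
#define OBD_OCD_VERSION(major, minor, patch, fix) \
        (((major) << 24) + ((minor) << 16) + ((patch) << 8) + (fix))
```

So 2.7.32 encodes as (2 << 24) + (7 << 16) + (32 << 8), i.e.
0x02072000.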
If the OBD_CONNECT_GRANT flag is set then the 'ocd_grant' field is
honored. The 'ocd_grant' value in a reply (to a connection request)
sets the client's grant.
If the OBD_CONNECT_INDEX flag is set then the 'ocd_index' field is
honored. The 'ocd_index' value is set in a reply to a connection
request. It holds the LOV index of the target.
If the OBD_CONNECT_BRW_SIZE flag is set then the 'ocd_brw_size' field
is honored. The 'ocd_brw_size' value sets the size of the maximum
supported RPC. The client proposes a value in its connection request,
and the server's reply will either agree or further limit the size.
If the OBD_CONNECT_IBITS flag is set then the 'ocd_ibits_known' field
is honored. The 'ocd_ibits_known' value determines the handling of
locks on inodes. See the discussion of inodes and extended attributes.
If the OBD_CONNECT_GRANT_PARAM flag is set then the 'ocd_blocksize',
'ocd_inodespace', and 'ocd_grant_extent' fields are honored. A server
reply uses the 'ocd_blocksize' value to inform the client of the log
base two of the size in bytes of the backend file system's blocks.

A server reply uses the 'ocd_inodespace' value to inform the client of
the log base two of the size of an inode.
Under some circumstances (for example when ZFS is the back end file
system) there may be additional overhead in handling writes for each
extent. The server uses the 'ocd_grant_extent' value to inform the
client of the size in bytes consumed from its grant on the server when
creating a new file. The client uses this value in calculating how
much dirty write cache it has and whether it has reached the limit
established by the target's grant.
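The grant accounting above can be sketched as follows. This is a
hypothetical helper, not the actual client code: the field names
mirror 'obd_connect_data', but the rounding policy is only an
illustration of how the log-base-two parameters would be applied.

```c
#include <assert.h>
#include <stdint.h>

/* Estimate the grant consumed by a cached write, given the backend
 * geometry sent under OBD_CONNECT_GRANT_PARAM. 'ocd_blocksize' is
 * the log base two of the backend block size; 'ocd_grant_extent' is
 * the per-extent overhead in bytes (illustrative use). */
static uint64_t grant_consumed(uint64_t write_bytes,
                               uint8_t ocd_blocksize,
                               uint16_t ocd_grant_extent)
{
        uint64_t bsize = 1ULL << ocd_blocksize;
        uint64_t blocks = (write_bytes + bsize - 1) / bsize; /* round up */

        return blocks * bsize + ocd_grant_extent;
}
```

A 5000-byte write against 4 KiB blocks ('ocd_blocksize' = 12)
consumes two full blocks of grant, plus any per-extent overhead.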
If the OBD_CONNECT_TRANSNO flag is set then the 'ocd_transno' field is
honored. A server uses the 'ocd_transno' value during recovery to
inform the client of the transaction number at which it should begin
replay.
If the OBD_CONNECT_MDS flag is set then the 'ocd_group' field is
honored. When an MDT connects to an OST the 'ocd_group' field informs
the OSS of the MDT's index. Objects on that OST for that MDT will be
in a common namespace served by that MDT.
If the OBD_CONNECT_CKSUM flag is set then the 'ocd_cksum_types' field
is honored. The client uses the 'ocd_cksum_types' field to propose to
the server the client's available (presumably hardware assisted)
checksum mechanisms. The server replies with the checksum types it has
available. Finally, the client will employ the fastest of the
agreed-upon mechanisms.
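A sketch of that selection follows. The type bits match the usual
OBD_CKSUM_* values, but the preference order (CRC32C over adler over
CRC32) is an assumption for illustration; the real client measures or
ranks the agreed types itself.

```c
#include <assert.h>
#include <stdint.h>

/* Checksum type bits (illustrative subset). */
#define CKSUM_CRC32  0x1u
#define CKSUM_ADLER  0x2u
#define CKSUM_CRC32C 0x4u

/* Pick the fastest type both sides support; assume CRC32C (often
 * hardware assisted) beats adler32, which beats plain CRC32. */
static uint32_t pick_cksum(uint32_t client_types, uint32_t server_types)
{
        uint32_t common = client_types & server_types;

        if (common & CKSUM_CRC32C)
                return CKSUM_CRC32C;
        if (common & CKSUM_ADLER)
                return CKSUM_ADLER;
        return CKSUM_CRC32; /* baseline fallback */
}
```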
If the OBD_CONNECT_MAX_EASIZE flag is set then the 'ocd_max_easize'
field is honored. The server uses 'ocd_max_easize' to inform the
client about the amount of space that can be allocated in each inode
for extended attributes. The 'ocd_max_easize' specifically refers to
the space used for striping information. This allows the client to
determine the maximum layout size (and hence stripe count) that can be
stored for a file.
The 'ocd_instance' field (alone) is not governed by an OBD_CONNECT_*
flag. The MGS uses the 'ocd_instance' value in its reply to a
connection request to inform the server and target of the "era" of its
connection. The MGS initializes the era value for each server to zero
and increments that value every time the target connects. This
supports imperative recovery.
If the OBD_CONNECT_MAXBYTES flag is set then the 'ocd_maxbytes' field
is honored. An OSS uses the 'ocd_maxbytes' value to inform the client
of the maximum OST object size for this target. A stripe on any OST
for a multi-striped file cannot be larger than the minimum
'ocd_maxbytes' value among the OSTs it is striped across.
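The stripe-size cap above is simply the minimum over the relevant
targets. An illustrative helper (not actual Lustre code):

```c
#include <assert.h>
#include <stdint.h>

/* The usable stripe size for a striped file is capped by the
 * smallest 'ocd_maxbytes' among the OSTs holding its stripes. */
static uint64_t stripe_size_limit(const uint64_t *ost_maxbytes,
                                  int nr_osts)
{
        uint64_t limit = UINT64_MAX;

        for (int i = 0; i < nr_osts; i++)
                if (ost_maxbytes[i] < limit)
                        limit = ost_maxbytes[i];
        return limit;
}
```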
The additional space in the 'obd_connect_data' structure is unused and
reserved for future use.
fixme: Discuss the meaning of the rest of the OBD_CONNECT_* flags.
#define IMP_STATE_HIST_LEN 16
struct import_state_hist {
        enum lustre_imp_state ish_state;
        time_t                ish_time;
};

struct obd_import {
        struct portals_handle     imp_handle;
        atomic_t                  imp_refcount;
        struct lustre_handle      imp_dlm_handle;
        struct ptlrpc_connection *imp_connection;
        struct ptlrpc_client     *imp_client;
        cfs_list_t                imp_pinger_chain;
        cfs_list_t                imp_zombie_chain;
        cfs_list_t                imp_replay_list;
        cfs_list_t                imp_sending_list;
        cfs_list_t                imp_delayed_list;
        cfs_list_t                imp_committed_list;
        cfs_list_t               *imp_replay_cursor;
        struct obd_device        *imp_obd;
        struct ptlrpc_sec        *imp_sec;
        struct mutex              imp_sec_mutex;
        cfs_time_t                imp_sec_expire;
        wait_queue_head_t         imp_recovery_waitq;
        atomic_t                  imp_inflight;
        atomic_t                  imp_unregistering;
        atomic_t                  imp_replay_inflight;
        atomic_t                  imp_inval_count;
        atomic_t                  imp_timeouts;
        enum lustre_imp_state     imp_state;
        struct import_state_hist  imp_state_hist[IMP_STATE_HIST_LEN];
        int                       imp_state_hist_idx;
        int                       imp_last_generation_checked;
        __u64                     imp_last_replay_transno;
        __u64                     imp_peer_committed_transno;
        __u64                     imp_last_transno_checked;
        struct lustre_handle      imp_remote_handle;
        cfs_time_t                imp_next_ping;
        __u64                     imp_last_success_conn;
        cfs_list_t                imp_conn_list;
        struct obd_import_conn   *imp_conn_current;
                                  imp_server_timeout:1,
                                  imp_delayed_recovery:1,
                                  imp_no_lock_replay:1,
                                  imp_force_next_verify:1,
                                  imp_no_pinger_recover:1,
                                  imp_force_reconnect:1,
        __u32                     imp_connect_op;
        struct obd_connect_data   imp_connect_data;
        __u64                     imp_connect_flags_orig;
        int                       imp_connect_error;
        __u32                     imp_msghdr_flags;    /* adjusted based on server capability */
        struct ptlrpc_request_pool *imp_rq_pool;       /* emergency request pool */
        struct imp_at             imp_at;              /* adaptive timeout data */
        time_t                    imp_last_reply_time; /* for health check */
};
The 'imp_handle' value is the unique id for the import, and is used as
a hash key to gain access to it. It is not used in any of the Lustre
protocol messages, but rather is just for internal reference.
The 'imp_refcount' is also for internal use. The value is incremented
with each RPC created, and decremented as the request is freed. When
the reference count is zero the import can be freed, as when the
target is being disconnected.
The 'imp_dlm_handle' is a reference to the LDLM export for this
connection.
There can be multiple paths through the network to a given
target, in which case there would be multiple 'obd_import_conn' items
on the 'imp_conn_list'. Each 'obd_import_conn' includes a
'ptlrpc_connection', so 'imp_connection' points to the one that is
currently in use.
The 'imp_client' identifies the (local) portals for sending and
receiving messages as well as the client's name. The information is
specific to either an MDC or an OSC.
The 'imp_pinger_chain' places the import on a linked list of imports
that need periodic pings.
The 'imp_zombie_chain' places the import on a list ready for being
freed. Unused imports ('imp_refcount' is zero) are deleted
asynchronously by a garbage collecting process.
In order to support recovery the client must keep requests that are in
the process of being handled by the target. The target replies to a
request as soon as the target has made its local update to
memory. When the client receives that reply the request is put on the
'imp_replay_list'. In the event of a failure (target crash, lost
message) this list is then replayed for the target during the recovery
process. When a request has been sent but has not yet received a reply
it is placed on the 'imp_sending_list'. In the event of a failure
those will simply be replayed after any recovery has been
completed. Finally, there may be requests that the client is delaying
before it sends them. This can happen if the client is in a degraded
mode, as when it is in recovery after a failure. These requests are
put on the 'imp_delayed_list' and not processed until recovery is
complete and the 'imp_sending_list' has been replayed.
In order to support recovery 'open' requests must be preserved even
after they have completed. Those requests are placed on the
'imp_committed_list' and the 'imp_replay_cursor' allows for
accelerated access to those items.
The 'imp_obd' is a reference to the details about the target device
that is the subject of this import. There is a lot of state info in
there along with many implementation details that are not relevant to
the actual Lustre protocol. fixme: I'll want to go through all of the
fields in that structure to see which, if any, need more discussion.
The security policy and settings are kept in 'imp_sec', and
'imp_sec_mutex' helps manage access to that info. The 'imp_sec_expire'
setting is in support of security policies that have an expiration
time.
Some processes may need the import to be in a fully connected state in
order to proceed. The 'imp_recovery_waitq' is where those threads will
wait during recovery.
The 'imp_inflight' field counts the number of in-flight requests. It
is incremented with each request sent and decremented with each reply
received.
The client reserves buffers for the processing of requests and
replies, and then informs LNet about those buffers. Buffers may get
reused during subsequent processing, but then a point may come when
the buffer is no longer going to be used. The client increments the
'imp_unregistering' counter and informs LNet the buffer is no longer
needed. When LNet has freed the buffer it will notify the client and
then the 'imp_unregistering' can be decremented again.
During recovery the 'imp_replay_inflight' counts the number of
requests from the replay list that have been sent and have not been
replied to.
The 'imp_inval_count' field counts how many threads are in the process
of cleaning up this connection or waiting for cleanup to complete. The
cleanup itself may be needed in the case there is an eviction or other
problem (fixme: what other problem?). The cleanup may involve freeing
allocated resources, updating internal state, running replay lists,
and invalidating cache. Since it could take a while there may end up
being multiple threads waiting on this process to complete.
The 'imp_timeouts' field is a counter that is incremented every time
there is a timeout in communication with the target.
The 'imp_state' tracks the state of the import. It draws from the
enumerated set of values:

.enum_lustre_imp_state
[options="header"]
|====
| state | value
| LUSTRE_IMP_CLOSED | 1
| LUSTRE_IMP_NEW | 2
| LUSTRE_IMP_DISCON | 3
| LUSTRE_IMP_CONNECTING | 4
| LUSTRE_IMP_REPLAY | 5
| LUSTRE_IMP_REPLAY_LOCKS | 6
| LUSTRE_IMP_REPLAY_WAIT | 7
| LUSTRE_IMP_RECOVER | 8
| LUSTRE_IMP_FULL | 9
| LUSTRE_IMP_EVICTED | 10
|====
fixme: what are the transitions between these states? The
'imp_state_hist' array maintains a list of the last 16
(IMP_STATE_HIST_LEN) states the import was in, along with the time it
entered each (fixme: or is it when it left that state?). The list is
maintained in a circular manner, so the 'imp_state_hist_idx' points to
the entry in the list for the most recently visited state.
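The circular bookkeeping just described can be sketched as below.
This is an illustration of the update step, not the kernel code; the
state set is abbreviated.

```c
#include <assert.h>
#include <time.h>

#define IMP_STATE_HIST_LEN 16

/* Abbreviated state set; the full enumeration is in the table
 * above. */
enum lustre_imp_state {
        LUSTRE_IMP_CLOSED = 1,
        LUSTRE_IMP_FULL   = 9,
};

struct import_state_hist {
        enum lustre_imp_state ish_state;
        time_t                ish_time;
};

/* Advance the index modulo the buffer length and record the new
 * state with its entry time, so the returned index always names the
 * most recently visited state. */
static int record_state(struct import_state_hist hist[], int idx,
                        enum lustre_imp_state state)
{
        idx = (idx + 1) % IMP_STATE_HIST_LEN;
        hist[idx].ish_state = state;
        hist[idx].ish_time = time(NULL);
        return idx;
}
```

Note how the index wraps: after slot 15 the next state overwrites
slot 0, keeping only the most recent 16 entries.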
The 'imp_generation' and 'imp_conn_cnt' fields are monotonically
increasing counters. Every time a connection request is sent to the
target the 'imp_conn_cnt' counter is incremented, and every time a
reply is received for the connection request the 'imp_generation'
counter is incremented.
The 'imp_last_generation_checked' implements an optimization. When a
replay process has successfully traversed the replay list the
'imp_generation' value is noted here. If the generation has not been
incremented then the replay list does not need to be traversed again.
During replay the 'imp_last_replay_transno' is set to the transaction
number of the last request being replayed, and
'imp_peer_committed_transno' is set to the 'pb_last_committed' value
(of the 'ptlrpc_body') from replies if that value is higher than the
previous 'imp_peer_committed_transno'. The 'imp_last_transno_checked'
field implements an optimization. It is set to the
'imp_last_replay_transno' as its replay is initiated. If
'imp_last_transno_checked' is still 'imp_last_replay_transno' and
'imp_generation' is still 'imp_last_generation_checked' then there
are no additional requests ready to be removed from the replay
list. Furthermore, 'imp_last_transno_checked' may no longer be needed,
since the committed transactions are now maintained on a separate list.
The 'imp_remote_handle' is the handle sent by the target in a
connection reply message to uniquely identify the export for this
target and client that is maintained on the server. This is the handle
used in all subsequent messages to the target.
There are two separate ping intervals (fixme: what are the
values?). If there are no uncommitted messages for the target then the
default ping interval is used to set the 'imp_next_ping' to the time
the next ping needs to be sent. If there are uncommitted requests then
a "short interval" is used to set the time for the next ping.
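The selection reduces to choosing one of two intervals. Since the
document does not give the actual values (see the fixme above), the
constants below are placeholders for illustration only:

```c
#include <assert.h>

/* Placeholder values; the real intervals are not stated in this
 * document. */
#define PING_INTERVAL       25 /* assumed default interval, seconds */
#define PING_INTERVAL_SHORT  5 /* assumed short interval, seconds */

/* Choose when the next ping is due: use the short interval when
 * uncommitted requests are outstanding. */
static long next_ping_time(long now, int have_uncommitted)
{
        return now + (have_uncommitted ? PING_INTERVAL_SHORT
                                       : PING_INTERVAL);
}
```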
The 'imp_last_success_conn' value is set to the time of the last
successful connection. fixme: The source says it is in 64 bit
jiffies, but does not further indicate how that value is calculated.
Since there can actually be multiple connection paths for a target
(due to failover or multihomed configurations) the import maintains a
list of all the possible connection paths in the list pointed to by
the 'imp_conn_list' field. The 'imp_conn_current' points to the one
currently in use. Compare with the 'imp_connection' fields. They point
to different structures, but each is reachable from the other.
Most of the flag, state, and list information in the import needs to
be accessed atomically. The 'imp_lock' is used to maintain the
consistency of the import while it is manipulated by multiple threads.
The various flags are documented in the source code and are largely
obvious from those short comments, reproduced here:

.import flags
[options="header"]
|====
| flag | meaning
| imp_no_timeout | timeouts are disabled
| imp_invalid | client has been evicted
| imp_deactive | client administratively disabled
| imp_replayable | try to recover the import
| imp_dlm_fake | don't run recovery (timeout instead)
| imp_server_timeout | use 1/2 timeout on MDSs and OSCs
| imp_delayed_recovery | VBR: imp in delayed recovery
| imp_no_lock_replay | VBR: if gap was found then no lock replays
| imp_vbr_failed | recovery by versions failed
| imp_force_verify | force an immediate ping
| imp_force_next_verify | force a scheduled ping
| imp_pingable | target is pingable
| imp_resend_replay | resend for replay
| imp_no_pinger_recover | disable normal recovery, for test only
| imp_need_mne_swab | need IR MNE swab
| imp_force_reconnect | import must be reconnected, not new connection
| imp_connect_tried | import has tried to connect with server
|====
A few additional notes are in order. The 'imp_dlm_fake' flag signifies
that this is not a "real" import, but rather it is a "reverse" import
in support of the LDLM. When the LDLM invokes callback operations the
messages are initiated at the other end, so there needs to be a fake
import to receive the replies from the operation. Prior to the
introduction of adaptive timeouts the servers were given fixed timeout
values that were half those used for the clients. The
'imp_server_timeout' flag indicated that the import should use the
half-sized timeouts, but with the introduction of adaptive timeouts
this facility is no longer used. "VBR" is "version based recovery",
and it introduces a new possibility for handling requests. Previously,
if there were a gap in the transaction number sequence then the
requests associated with the missing transaction numbers would be
discarded. With VBR those transactions only need to be discarded if
there is an actual dependency between the ones that were skipped and
the currently latest committed transaction number. fixme: What are the
circumstances that would lead to setting the 'imp_force_next_verify'
or 'imp_pingable' flags? During recovery, the client sets the
'imp_no_pinger_recover' flag, which tells the process to proceed from
the current value of 'imp_last_replay_transno'. The
'imp_need_mne_swab' flag indicates a version dependent circumstance
where swabbing was inadvertently left out of one processing step.
An 'obd_export' structure for a given target is created on a server
for each client that connects to that target. The exports for all the
clients for a given target are managed together. The export represents
the connection state between the client and target as well as the
current state of any ongoing activity. Thus each pending request will
have a reference to the export. The export is discarded if the
connection goes away, but only after all the references to it have
been cleaned up. The state information for each export is also
maintained on disk. In the event of a server failure, that or another
server can read the export data from disk to enable recovery.
struct obd_export {
        struct portals_handle     exp_handle;
        atomic_t                  exp_refcount;
        atomic_t                  exp_rpc_count;
        atomic_t                  exp_cb_count;
        atomic_t                  exp_replay_count;
        atomic_t                  exp_locks_count;
#if LUSTRE_TRACKS_LOCK_EXP_REFS
        cfs_list_t                exp_locks_list;
        spinlock_t                exp_locks_list_guard;
#endif
        struct obd_uuid           exp_client_uuid;
        cfs_list_t                exp_obd_chain;
        cfs_hlist_node_t          exp_uuid_hash;
        cfs_hlist_node_t          exp_nid_hash;
        cfs_list_t                exp_obd_chain_timed;
        struct obd_device        *exp_obd;
        struct obd_import        *exp_imp_reverse;
        struct nid_stat          *exp_nid_stats;
        struct ptlrpc_connection *exp_connection;
        __u32                     exp_conn_cnt;
        cfs_hash_t               *exp_lock_hash;
        cfs_hash_t               *exp_flock_hash;
        cfs_list_t                exp_outstanding_replies;
        cfs_list_t                exp_uncommitted_replies;
        spinlock_t                exp_uncommitted_replies_lock;
        __u64                     exp_last_committed;
        cfs_time_t                exp_last_request_time;
        cfs_list_t                exp_req_replay_queue;
        spinlock_t                exp_lock;
        struct obd_connect_data   exp_connect_data;
        enum obd_option           exp_flags;
                                  exp_req_replay_needed:1,
                                  exp_lock_replay_needed:1,
        enum lustre_sec_part      exp_sp_peer;
        struct sptlrpc_flavor     exp_flvr;
        struct sptlrpc_flavor     exp_flvr_old[2];
        cfs_time_t                exp_flvr_expire[2];
        spinlock_t                exp_rpc_lock;
        cfs_list_t                exp_hp_rpcs;
        cfs_list_t                exp_reg_rpcs;
        cfs_list_t                exp_bl_list;
        spinlock_t                exp_bl_list_lock;
        union {
                struct tg_export_data     eu_target_data;
                struct mdt_export_data    eu_mdt_data;
                struct filter_export_data eu_filter_data;
                struct ec_export_data     eu_ec_data;
                struct mgs_export_data    eu_mgs_data;
        } u;
        struct nodemap           *exp_nodemap;
};
The 'exp_handle' is a little extra information as compared with a
'struct lustre_handle', which is just the cookie. The cookie that the
server generates to uniquely identify this connection gets put into
this structure along with information about the device in
question. This is the cookie the *_CONNECT reply sends back to the
client and is then stored in the client's import.
The 'exp_refcount' gets incremented whenever some aspect of the export
is "in use". The arrival of an otherwise unprocessed message for this
target will increment the refcount. A reference by an LDLM lock that
gets taken will increment the refcount. Callback invocations and
replay also lead to incrementing the 'exp_refcount'. The next four
fields - 'exp_rpc_count', 'exp_cb_count', 'exp_replay_count', and
'exp_locks_count' - all subcategorize the 'exp_refcount'. The
reference counter keeps the export alive while there are any users of
that export. The reference counter is also used for debug
purposes. Similarly, the 'exp_locks_list' and 'exp_locks_list_guard'
are further debug info that list the actual locks accounted for in
'exp_locks_count'.
The 'exp_client_uuid' gives the UUID of the client connected to this
export. Fixme: when and how does the UUID get generated?
The server maintains all the exports for a given target on a circular
list. Each export's place on that list is maintained in the
'exp_obd_chain'. A common activity is to look up the export based on
the UUID or the nid of the client, and the 'exp_uuid_hash' and
'exp_nid_hash' fields maintain this export's place in hashes
constructed for that purpose.
Exports are also maintained on a list sorted by the last time the
corresponding client was heard from. The 'exp_obd_chain_timed' field
maintains the export's place on that list. When a message arrives from
the client the time is "now" so the export gets put at the end of the
list. Since it is circular, the next export is then the oldest. If it
has not been heard from within its timeout interval that export is
marked for later eviction.
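The scan implied above can be sketched with a simplified singly
linked model (not the kernel's list implementation): because the list
is ordered by last contact, walking from the oldest end can stop at
the first export still within its timeout.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for an export on the timed list. */
struct timed_export {
        long                 last_request_time;
        int                  evict;
        struct timed_export *next; /* toward more recently heard-from */
};

/* Walk from the oldest export; mark stale ones and stop at the
 * first fresh one, since everything after it is newer still. */
static void mark_stale_exports(struct timed_export *oldest,
                               long now, long timeout)
{
        for (struct timed_export *e = oldest; e != NULL; e = e->next) {
                if (now - e->last_request_time <= timeout)
                        break;
                e->evict = 1;
        }
}
```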
The 'exp_obd' points to the 'obd_device' structure for the device that
is the target of this export.
In the event of an LDLM call-back the export needs the ability to
initiate messages back to the client. The 'exp_imp_reverse' provides a
"reverse" import that manages this capability.
The '/proc' stats for the export (and the target) get updated via the
'exp_nid_stats' field.
The 'exp_connection' points to the connection information for this
export. This is the information about the actual networking pathway(s)
that get used for communication.
The 'exp_conn_cnt' notes the connection count value from the client at
the time of the connection. In the event that more than one connection
request is issued before the connection is established then the
'exp_conn_cnt' will list the highest value. If a previous connection
attempt (with a lower value) arrives later it may be safely
discarded. Every request lists its connection count, so non-connection
requests with lower connection count values can also be discarded.
Note that this does not count how many times the client has connected
to the target. If a client is evicted the export is deleted once it
has been cleaned up and its 'exp_refcount' reduced to zero. A new
connection from the client will get a new export.
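The staleness test described above reduces to a simple comparison.
An illustrative sketch, not the actual server code:

```c
#include <assert.h>
#include <stdint.h>

/* A request carrying a connection count lower than the export's
 * recorded 'exp_conn_cnt' belongs to a superseded connection attempt
 * and can safely be discarded. */
static int request_is_stale(uint32_t req_conn_cnt,
                            uint32_t exp_conn_cnt)
{
        return req_conn_cnt < exp_conn_cnt;
}
```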
The 'exp_lock_hash' provides access to the locks granted to the
corresponding client for this target. If a lock cannot be granted it
is discarded. A file system lock ("flock") is also implemented through
the LDLM lock system, but not all LDLM locks are flocks. The ones that
are flocks are gathered in a hash 'exp_flock_hash'. This supports
deadlock detection.
For those requests that initiate file-system-modifying transactions,
the request and its attendant locks need to be preserved until either
a) the client acknowledges receiving the reply, or b) the transaction
has been committed locally. This ensures a request can be replayed in
the event of a failure. The LDLM lock is kept until one of these
events occurs, to prevent any other modifications of the same object.
The reply is kept on the 'exp_outstanding_replies' list until the LNet
layer notifies the server that the reply has been acknowledged. A reply
is kept on the 'exp_uncommitted_replies' list until the transaction
(if any) has been committed.
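A minimal model of the two retention rules, with invented predicate
names (a sketch, not the Lustre list handling): a reply leaves
'exp_outstanding_replies' on acknowledgement and leaves
'exp_uncommitted_replies' on commit, so it can be freed only once it
is off both lists:

```c
#include <stdbool.h>

/* Still on 'exp_outstanding_replies' until the client acknowledges. */
static bool on_outstanding_replies(bool client_acked)
{
        return !client_acked;
}

/* Still on 'exp_uncommitted_replies' until the transaction commits. */
static bool on_uncommitted_replies(bool txn_committed)
{
        return !txn_committed;
}

/* The reply may be released only when it is on neither list. */
static bool reply_can_be_freed(bool client_acked, bool txn_committed)
{
        return !on_outstanding_replies(client_acked) &&
               !on_uncommitted_replies(txn_committed);
}
```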
The 'exp_last_committed' value keeps the transaction number of the
last committed transaction. Every reply to a client includes this
value as a means of early-as-possible notification of transactions
that have committed.
The 'exp_last_request_time' is self-explanatory.
During replay, a request that is waiting for a reply is maintained on
the 'exp_req_replay_queue' list.
The 'exp_lock' spin-lock is used for access control to the export's
flags, as well as the 'exp_outstanding_replies' list and the reverse
import.
The 'exp_connect_data' refers to an 'obd_connect_data' structure for
the connection established between this target and the client this
export refers to. See also the corresponding entry in the import and
in the connect messages passed between the hosts.
The 'exp_flags' field encodes three directives as follows:

    OBD_OPT_FORCE = 0x0001,
    OBD_OPT_FAILOVER = 0x0002,
    OBD_OPT_ABORT_RECOV = 0x0004,

fixme: Are these set for some exports as a condition of their
existence, or do they reflect a transient state the export is passing
through?
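Given the values above, the directives can be tested as a bitmask. The
helper below is a hedged sketch (the helper name is invented; only the
enum values come from the text):

```c
/* Flag values as given in the text. */
enum obd_opt {
        OBD_OPT_FORCE       = 0x0001,
        OBD_OPT_FAILOVER    = 0x0002,
        OBD_OPT_ABORT_RECOV = 0x0004,
};

/* Hypothetical helper: test whether a directive is set in the
 * export's flags word. */
static int exp_flag_is_set(unsigned long exp_flags, enum obd_opt opt)
{
        return (exp_flags & opt) != 0;
}
```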
The 'exp_failed' flag gets set whenever the target has failed for any
reason or the export is otherwise due to be cleaned up. Once set it
will not be unset in this export. Any subsequent connection between
the client and the target would be governed by a new export.
After a failure, export data is retrieved from disk and the exports
are recreated. Exports created in this way will have their
'exp_in_recovery' flag set. Once any outstanding requests and locks
have been recovered for the client, the export is recovered and
'exp_in_recovery' can be cleared. When all the client exports for a
given target have been recovered, the target is considered recovered,
and when all targets have been recovered, the server is considered
recovered.
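The aggregation rule can be sketched as follows. The names and types
are invented for illustration (a real export is of course not a bare
boolean):

```c
#include <stdbool.h>
#include <stddef.h>

/* A target is recovered once no export for it still has
 * 'exp_in_recovery' set. */
static bool target_recovered(const bool *exp_in_recovery,
                             size_t n_exports)
{
        for (size_t i = 0; i < n_exports; i++)
                if (exp_in_recovery[i])
                        return false;
        return true;
}
```

The same all-of rule applies one level up: the server is recovered
once every target is.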
A *_DISCONNECT message from the client will set the 'exp_disconnected'
flag, as will any sort of failure of the target. Once set, the export
will be cleaned up and deleted.
When a *_CONNECT message arrives the 'exp_connecting' flag is set. If
for some reason a second *_CONNECT request arrives from the client it
can be discarded when this flag is set.
The 'exp_delayed' flag is no longer used. In older code it indicated
that recovery had not completed in a timely fashion, but that a tardy
recovery would still be possible, since there were no dependencies on
The 'exp_vbr_failed' flag indicates a failure during the recovery
process. See <<recovery>> for a more detailed discussion of recovery
and transaction replay. For a file-system-modifying request, the
server composes its reply including the 'pb_pre_versions' entries in
'ptlrpc_body', which indicate the most recent updates to the
object. The client updates the request with the 'pb_transno' and
'pb_pre_versions' from the reply, and keeps that request until the
target signals that the transaction has been committed to disk. If the
client times out without that confirmation then it will 'replay' the
request, which now includes the 'pb_pre_versions' information. During
a replay the target checks that the object has the same version as
given by 'pb_pre_versions' in the replay. If this check fails then the
object cannot be restored to the same state it was in before the
failure. Usually that happens when the recovery process fails for the
connection between some other client and this target, so part of the
change needed for this client was not restored. At that point the
'exp_vbr_failed' flag is set to indicate that version-based recovery
failed. This will lead to the client being evicted and this export
being cleaned up and deleted.
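A simplified sketch of the version check during replay, with invented
names; the real check compares entries of the 'pb_pre_versions' array
carried in 'ptlrpc_body', not a single scalar:

```c
#include <stdbool.h>

/* The replayed request carries the pre-operation version it saw.
 * If the object's current version on the target differs, some
 * intermediate update was lost and version-based recovery fails
 * for this client. */
static bool vbr_replay_ok(unsigned long long obj_version,
                          unsigned long long pb_pre_version)
{
        return obj_version == pb_pre_version;
}
```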
At the start of recovery both the 'exp_req_replay_needed' and
'exp_lock_replay_needed' flags are set. As request replay is completed
the 'exp_req_replay_needed' flag is cleared. As lock replay is
completed the 'exp_lock_replay_needed' flag is cleared. Once both are
cleared the 'exp_in_recovery' flag can be cleared.
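As a sketch (the function name is invented), the condition for leaving
recovery is simply the conjunction of the two replay phases being
done:

```c
#include <stdbool.h>

/* 'exp_in_recovery' may be cleared only after both request replay
 * and lock replay have completed for this export. */
static bool can_clear_in_recovery(bool req_replay_needed,
                                  bool lock_replay_needed)
{
        return !req_replay_needed && !lock_replay_needed;
}
```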
The 'exp_need_sync' flag supports an optimization. At mount time it is
likely that every client (potentially thousands) will create an
export, and that export will need to be saved to disk
synchronously. This can lead to an unusually high and poorly
performing interaction with the disk. When the export is created the
'exp_need_sync' flag is set and the actual writing to disk is
delayed. As transactions arrive from clients (in a much less
coordinated fashion) the 'exp_need_sync' flag indicates a need to have
the export data on disk before proceeding with a new transaction, so
the next transaction that updates the export is performed
synchronously to commit all changes to disk. At that point the flag is
cleared (except see below).
In DNE (phase I) the export for an MDT managing the connection from
another MDT needs to keep the 'exp_need_sync' flag set at all
times. For that special case such an export sets the 'exp_keep_sync'
flag, which then prevents the 'exp_need_sync' flag from ever being
cleared. This will no longer be needed in DNE Phase II.
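A minimal model of the flag transition, with an invented struct
standing in for the relevant bits of the export:

```c
#include <stdbool.h>

/* Invented stand-in for the two sync-related export flags. */
struct exp_sync_state {
        bool need_sync;   /* export data must reach disk before the
                           * next transaction proceeds */
        bool keep_sync;   /* DNE phase I: pin need_sync permanently */
};

/* After a transaction for this export commits synchronously, the
 * sync requirement is satisfied and cleared, unless 'keep_sync'
 * pins it (the MDT-to-MDT case above). */
static void exp_after_sync_commit(struct exp_sync_state *s)
{
        if (!s->keep_sync)
                s->need_sync = false;
}
```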
The 'exp_flvr_changed' and 'exp_flvr_adapt' flags, along with the
'exp_sp_peer', 'exp_flvr', 'exp_flvr_old', and 'exp_flvr_expire'
fields, are all used to manage the security settings for the
connection. Security is discussed in the <<security>> section. (fixme:
The 'exp_libclient' flag indicates that the export is for a client
based on "liblustre". This allows for simplified handling on the
server. (fixme: how is processing simplified? It sounds like I may
need a whole special section on liblustre.)
The 'exp_need_mne_swab' flag indicates the presence of an old bug that
affected one special case of failed swabbing. It is not part of
As RPCs arrive they are first subjected to triage. Each request is
placed on the 'exp_hp_rpcs' list and examined to see if it is high
priority (PING, truncate, bulk I/O). If it is not high priority then
it is moved to the 'exp_reg_rpcs' list. The 'exp_rpc_lock' protects
both lists from concurrent access.
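A hedged sketch of the triage decision; the opcode tags here are
invented stand-ins for the real RPC opcodes, and only the three
high-priority categories come from the text:

```c
#include <stdbool.h>

/* Invented opcode tags for illustration only. */
enum rpc_kind { RPC_PING, RPC_TRUNCATE, RPC_BULK_IO, RPC_OTHER };

/* High-priority requests stay on 'exp_hp_rpcs'; everything else is
 * moved to 'exp_reg_rpcs'. */
static bool rpc_is_high_priority(enum rpc_kind k)
{
        return k == RPC_PING || k == RPC_TRUNCATE || k == RPC_BULK_IO;
}
```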
All arriving LDLM requests get put on the 'exp_bl_list' and access to
that list is controlled via the 'exp_bl_list_lock'.
The union provides for target-specific data. The 'eu_target_data' is a
common core of fields for a generic target. The others are specific to
particular target types: 'eu_mdt_data' for MDTs, 'eu_filter_data' for
OSTs, 'eu_ec_data' for an "echo client" (fixme: describe what an echo
client is somewhere), and 'eu_mgs_data' for the MGS.
The 'exp_bl_lock_at' field supports adaptive timeouts, which will be
discussed separately. (fixme: so discuss it somewhere.)
Each export maintains a connection count.