******************************************************
* Overview of the Lustre Client I/O (CLIO) subsystem *
******************************************************

Nikita Danilov <Nikita_Danilov@xyratex.com>

       ii. Top-{object,lock,page}, Sub-{object,lock,page}
  1.3. Main Differences with the Pre-CLIO Client Code
  1.4. Layered objects, Slices, Headers
  2.1. VVP, SLP, Echo-client
  2.2. LOV, LOVSUB (layouts)
  3.1. FID, Hashing, Caching, LRU
  3.2. Top-object, Sub-object
  3.3. Object Operations
  3.4. Object Attributes
  4.3. Page Transfer Locking
  5.2. Top-lock and Sub-locks
  5.3. Lock State Machine
  5.6. Use Case: Lock Invalidation
  6.4. Data-flow: From Stack to IO Slice
  7.1. Immediate vs. Opportunistic Transfers
  7.3. Transfer States: Prepare, Completion
  7.4. Page Completion Handlers, Synchronous Transfer
  8. LU Environment (lu_env)
  8.1. Motivation, Server Environment Usage
  9.2. First IO to a File
  9.4. Lock-less and No-cache IO

CLIO is a re-write of interfaces between layers in the client data-path (read,
write, truncate). Its goals are:

- Reduce the number of bugs in the IO path;

- Introduce more logical layer interfaces instead of the current all-in-one OBD
  device interface;

- Define clear and precise semantics for the interface entry points;

- Simplify the structure of the client code.

- Support upcoming features:

  . parallel non-blocking IO,

- Reduce stack consumption.

- No meta-data changes;
- No support for 2.4 kernels;
- No changes to recovery;
- The same layers with mostly the same functionality;
- As few changes to the core logic of each Lustre data-stack layer as possible
  (e.g., no changes to the read-ahead or OSC RPC logic).

Any discussion of client functionality has to talk about `read' and `write'
system calls on the one hand, and about `read' and `write' requests to the
server on the other hand. To avoid confusion, the former high-level operations
are called `IO', while the latter are called `transfer'.

Many concepts, such as reference counting and caching, apply uniformly to
pages, locks, files, and IO contexts. To describe such situations, a common
term is needed to denote things from any of the above classes. `Object' would
be a natural choice, but files and especially stripes are already called
objects, so the term `entity' is used instead.

Due to striping, it is often the case that some entity is composed of multiple
entities of the same kind: a file is composed of stripe objects, a logical lock
on a file is composed of stripe locks on the file's stripes, etc. In these
cases we shall talk about a top-object, top-lock, top-page, or top-IO being
constructed from sub-objects, sub-locks, sub-pages, or sub-IOs respectively.

The topmost module in the Linux client is traditionally known as `llite'. The
corresponding CLIO layer is called `VVP' (VFS, VM, POSIX) to reflect its
functional responsibilities. The top-level layer for liblustre is called `SLP'.
VVP and SLP share a lot of logic and data-types. Their common functions and
types are prefixed with `ccc', which stands for "Common Client Code".

1.3. Main Differences with the Pre-CLIO Client Code
===================================================

- Locks on files (as opposed to locks on stripes) are first-class objects;

- Sub-objects (stripes) are first-class objects;

- Stripe-related logic is moved out of llite (almost);

- IO control flow is different:

  . Pre-CLIO: llite implements the control flow, calling underlying OBD
    methods as necessary;

  . CLIO: generic code (cl_io_loop()) controls the IO logic, calling all
    layers, including VVP.

  In other words, VVP (or any other top layer), instead of calling some
  pre-existing `lustre interface', also implements parts of this interface.

- The lu_env allocator from MDT is used on the client.

CLIO continues the layered object approach that was found to be useful for the
MDS rewrite in Lustre 2.0. In this approach, instances of key entity types
(files, pages, locks, etc.) are represented as a header, containing attributes
shared by all layers. Each header contains a linked list of per-layer `slices',
each of which contains a pointer to a vector of function pointers. Generic
operations on layered objects are implemented by going through the list of
slices and invoking the corresponding function from the operation vector at
every layer. In this way generic object behavior is delegated to the layers.

For example, a page entity is represented by struct cl_page, from which hangs
off a list of cl_page_slice structures, one for each layer in the stack.
cl_page_slice contains a pointer to struct cl_page_operations.
cl_page_operations has the field

        void (*cpo_completion)(const struct lu_env *env,
                               const struct cl_page_slice *slice, int ioret);

When transfer of a page is finished, ->cpo_completion() methods are called in a
particular order (bottom to top in this case).

Allocation of slices is done during instance creation. If a layer needs some
private state for an object, it embeds the slice into its own data structure.
For example, the OSC layer defines

        struct osc_lock {
                struct cl_lock_slice   ols_cl;
                struct ldlm_lock      *ols_lock;
                ...
        };

When an operation from cl_lock_operations is called, it is given a pointer to
struct cl_lock_slice, and the layer casts it to its private structure (for
example, struct osc_lock) to access per-layer state.

The following types of layered objects exist in CLIO:

- File system objects (files and stripes): struct cl_object_header, slices
  are of type struct cl_object;

- Cached pages with data: struct cl_page, slices are of type cl_page_slice;

- Extent locks: struct cl_lock, slices are of type cl_lock_slice;

- IO contexts: struct cl_io, slices are of type cl_io_slice;

- Transfer requests: struct cl_req, slices are of type cl_req_slice.

Entities with different sequences of slices can co-exist. A typical example of
this is a local vs. remote object on the MDS. A local object, based on some
file in the local file system, has MDT, MDD, LOD and OSD as its layers, whereas
a remote object (representing an object local to some other MDT) has MDT, MDD,

When the client is being mounted, its device stack is configured according to
llog configuration records. The typical configuration is

        .... osc_device's ....

In this tree every node knows its descendants. When a new file (inode) is
created, every layer, starting from the top, creates a slice with a state and
an operation vector for this layer, appends this slice to the tail of a list
anchored at the object header, and then calls the corresponding lower-layer
device to do the same. That is, the file object structure is determined by the
configuration of devices to which this file belongs.

Pages and locks, in turn, belong to the file objects, and when a new page is
created for a given object, the slices of this object are iterated through and
every slice is asked to initialize a new page, which usually includes
allocation of a new page slice and its insertion into the list of page slices.
Lock and IO context instantiation is handled similarly.

All layered objects except IO contexts and transfer requests (which leaves file
objects, pages and locks) are reference counted and cached. They have a uniform
caching mechanism:

- Objects are kept in some sort of index (a global FID hash for file objects,
  a per-file radix tree for pages, and a per-file list for locks);

- A reference to an object can be acquired by the cl_{object,page,lock}_find()
  functions, which search the index and, if the object is not there, create a
  new one and insert it into the index;

- A reference is released by the cl_{object,page,lock}_put() functions. When
  the last reference is released, the object is returned to the cache (still
  in the index), except when the user has explicitly set the `do not cache'
  flag for this object. In the latter case the object is destroyed immediately.
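
The find/put protocol can be condensed into the following sketch; the one-slot
"index", the entity type, and both helpers are invented for illustration, and
all real-world locking is omitted:

```c
#include <assert.h>
#include <stdlib.h>

struct entity {
        int refcount;
        int do_not_cache;       /* set when the entity must not be reused */
};

static struct entity *cache_slot;       /* one-slot "index" for the sketch */

/* _find(): look up the index, create and insert on a miss, then take a
 * reference. */
static struct entity *entity_find(void)
{
        if (cache_slot == NULL)
                cache_slot = calloc(1, sizeof(*cache_slot));
        cache_slot->refcount++;
        return cache_slot;
}

/* _put(): drop a reference; on the last one, either keep the entity
 * cached in the index or destroy it if `do not cache' was set. */
static void entity_put(struct entity *e)
{
        if (--e->refcount > 0)
                return;
        if (e->do_not_cache) {
                cache_slot = NULL;      /* remove from the index */
                free(e);
        }                               /* else: stays cached, refcount 0 */
}
```

The point of the design is visible even in this toy: a cached entity with a
zero refcount is still reachable through the index, so the next _find() is a
cheap hit rather than a fresh allocation.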

IO contexts are owned by a thread (or, potentially, a group of threads) doing
IO, and need neither reference counting nor indexing. Similarly, transfer
requests are owned by an OSC device, and their lifetime is from RPC creation
until completion notification.

All types of layered objects contain a state machine, although for transfer
requests this machine is trivial (CREATED -> PREPARED -> INFLIGHT ->
COMPLETED), and for file objects it is very simple. The page, lock, and IO
state machines are described in more detail below.

As a general rule, state machine transitions are made under some kind of lock:
the VM lock for a page, a per-lock mutex for a cl_lock, and the LU site
spin-lock for an object. After an event that might cause a state transition,
this lock is taken, and the object state is analysed to check whether the
transition is possible. If it is, the state machine is advanced to the new
state and the lock is released. IO state transitions do not require
concurrency control.
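
The take-the-lock-then-check discipline can be sketched on the trivial
transfer-request machine; the types and req_advance() are invented, and a
pthread mutex stands in for the various guards (VM lock, cll_guard, site
spin-lock):

```c
#include <assert.h>
#include <pthread.h>

enum req_state { RS_CREATED, RS_PREPARED, RS_INFLIGHT, RS_COMPLETED };

struct transfer_req {
        pthread_mutex_t guard;          /* protects ->state */
        enum req_state  state;
};

/* Advance to `next' only if it is the legal successor of the current
 * state; the check and the update happen under the same lock. */
static int req_advance(struct transfer_req *req, enum req_state next)
{
        int ok;

        pthread_mutex_lock(&req->guard);
        ok = (req->state + 1 == next);
        if (ok)
                req->state = next;
        pthread_mutex_unlock(&req->guard);
        return ok;
}
```

Examining the state and advancing it inside one critical section is what makes
the transition safe against concurrent events; a failed check simply means the
event arrived in a state where the transition is not (yet) possible.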

The state machine and reference counting interact during object destruction.
In addition to temporary pointers to an entity (which are counted in its
reference counter), an entity is reachable through:

- The indexing structures described above;

- Pointers internal to some layer of this entity. For example, a page is
  reachable through a pointer from the VM page, a lock might be reachable
  through an ldlm_lock::l_ast_data pointer, and a sub-{lock,object,page} might
  be reachable through a pointer from its top-entity.

Entity destruction happens in three phases:

- First, a decision is made to destroy an entity, when, for example, a lock is
  cancelled, or a page is truncated from a file. At this point the `do not
  cache' bit is set in the entity header, and all ways to reach the entity
  from internal pointers are severed.

  cl_{page,lock,object}_get() functions never return an entity with the `do
  not cache' bit set, so from this moment no new internal pointers can be
  obtained.

  See: cl_page_delete(), cl_lock_delete();

- Pointers `drain' for some time as existing references are released. In
  this phase the entity is reachable through

  . temporary pointers, counted in its reference counter, and
  . possibly a pointer in the indexing structure.

- When the last reference is released, the entity can be safely freed (after
  possibly removing it from the index).

  See lu_object_put(), cl_page_put(), cl_lock_put().

The CLIO code resides in the following files:

        {llite,lov,osc}/*_{dev,object,lock,page,io}.c

All global CLIO data-types are defined in the include/cl_object.h header,
which contains detailed documentation. Generic CLIO code is in
obdclass/cl_{object,page,lock,io}.c.

An implementation of the CLIO interfaces for a layer foo is located in
foo/foo_{dev,object,page,lock,io}.c files, with the (temporary) exception of
the liblustre code, which is located in liblustre/llite_cl.c.

Definitions of data structures shared within a layer are in
foo/foo_cl_internal.h.

This section briefly outlines the responsibility of every layer in the stack.
A more detailed description of the functionality is in the following sections
on objects, pages and locks.

2.1. VVP, SLP, Echo-client
==========================

There are currently 3 options for the top-most Lustre layer:

- VVP: the Linux kernel client,
- SLP: the liblustre client, and
- echo-client: a special client used by the Lustre testing sub-system.

Other possibilities are:

- Client ports to other operating systems (OSX, Windows, Solaris),
- pNFS and NFS exports.

The responsibilities of the top-most layer include:

- Definition of the entry points through which Lustre is accessed by
  applications;
- Interaction with the hosting VM/MM system;
- Interaction with the hosting VFS or equivalent;
- Implementation of the desired semantics on top of Lustre (e.g. POSIX
  compatibility).

Let's look at VVP in more detail. First, VVP implements VFS entry points
required by the Linux kernel interface: ll_file_{read,write,sendfile}(). Then,
VVP implements VM entry points: ll_{write,invalidate,release}page().

For file objects, the VVP slice (ccc_object, shared with liblustre) contains a
pointer to the inode.

For pages, the VVP slice (ccc_page) contains a pointer to the VM page
(cfs_page_t), a `defer up to date' flag to track read-ahead hits (similar to
the pre-CLIO client), and fields necessary for synchronous transfer (see
below). VVP is responsible for implementing the interaction between the
client page (cl_page) and the VM.

There is no special VVP private state for locks.

For IO, VVP implements:

- Mapping from Linux-specific entry points (readv, writev, sendfile, etc.)
  to the generic IO loop (cl_io_loop());

- POSIX features like short reads, O_APPEND atomicity, etc.;

- Read-ahead (this is arguably not the best layer in which to implement
  read-ahead, as the existing read-ahead algorithm is network-aware).

The LOV layer implements RAID-0 striping. It maps top-entities (file objects,
locks, pages, IOs) to one or more sub-entities. LOVSUB is a companion layer
that does the reverse mapping.

The OSC layer deals with networking:

- It decides when an efficient RPC can be formed from cached data;

- It calls LNET to initiate a transfer and to get notification of completion;

- It calls LDLM to implement distributed cache coherency, and to get
  notifications of lock cancellation requests;

3.1. FID, Hashing, Caching, LRU
===============================

Files and stripes are collectively known as (file system) `objects'. The CLIO
client reuses the support for layered objects from the MDT stack. Both client
and MDT objects are based on the struct lu_object type, representing a slice
of a file system object. The lu_object's for a given object are linked through
the ->lo_linkage field into a list hanging off the ->loh_layers field of
struct lu_object_header, which represents the whole layered object.

lu_object and lu_object_header provide functionality common between a client
and a server:

- An object is uniquely identified by a FID; all objects are kept in a hash
  table indexed by FID;

- Objects are reference counted. When the last reference to an object is
  released, it is returned to the cache, unless it has been explicitly marked
  for deletion, in which case it is destroyed immediately;

- Objects in the cache are kept in an LRU list that is scanned to keep the
  cache size under control.

On the MDT, lu_object is wrapped into struct md_object, where additional state
that all server-side objects have is stored. Similarly, on a client, lu_object
and lu_object_header are embedded into struct cl_object and struct
cl_object_header, where additional client state is stored.

cl_object_header contains the following additional state:

- ->coh_tree: a radix tree of cached pages for this object. In this tree pages
  are indexed by their logical offset from the beginning of this object. This
  tree is protected by the ->coh_page_guard spin-lock;

- ->coh_locks: a doubly linked list of all locks for this object. Locks in all
  possible states (see the Locks section below) reside in this list without
  any particular ordering.

3.2. Top-object, Sub-object
===========================

An important distinction from the server side, where md_object and dt_object
are used, is that cl_object "fans out" at the LOV level: depending on the file
layout, a single file is represented as a set of "sub-objects" (stripes). At
the implementation level, struct lov_object contains an array of cl_objects.
Each sub-object is a full-fledged cl_object, having its own FID and living in
the LRU and hash table. Each sub-object has its own radix tree of pages, and
its own list of locks.

This leads to the next important difference from the server side: on the
client, it is quite usual to have objects with different sequences of layers.
For example, a typical top-object is composed of the following layers:

        - VVP
        - LOV

whereas its sub-objects are composed of the layers:

        - LOVSUB
        - OSC

Here "LOVSUB" is a mostly dummy layer, whose purpose is to keep track of the
object/sub-object relationship:

        cl_object_header -+---> radix tree of pages    (top-object)
                .  .  .
        cl_object_header -+---> radix tree of pages    (sub-objects, one
                                                        per stripe)

Sub-objects are not cached independently: when a top-object is about to be
discarded from memory, all its sub-objects are torn down and destroyed too.

3.3. Object Operations
======================

In addition to the lu_object_operations vector, each cl_object slice has
cl_object_operations. lu_object_operations deals with object creation and
destruction. Client-specific cl_object_operations fall into two categories:

- Creation of dependent entities: the ->coo_{page,lock,io}_init() methods,
  called at every layer when a new page, lock or IO context is being created;

- Object attributes: the ->coo_attr_{get,set}() methods, called to get or
  set common client object attributes (struct cl_attr): size, [mac]times, etc.

3.4. Object Attributes
======================

A cl_object has a set of attributes defined by struct cl_attr. Attributes
include object size, object known-minimum-size (KMS), access, change and
modification times, and ownership identifiers. A description of KMS is beyond
the scope of this document; refer to the (non-)existent Lustre documentation
on the subject.

Both top-objects and sub-objects have attributes. Consistency of the
attributes is protected by a lock on the top-object, accessible through the
cl_object_attr_{un,}lock() calls. This allows a sub-object and its top-object
attributes to be changed atomically.

Attributes are accessible through the cl_object_attr_{g,s}et() functions,
which call the per-layer ->coo_attr_{s,g}et() object methods. Top-object
attributes are calculated from the sub-object ones by lov_attr_get(), which
optimizes for the case when none of the sub-object attributes have changed
since the last call to lov_attr_get().

As a further potential optimization, top-object attributes could be
recalculated at the moment when any sub-object attribute is changed. This
would make it possible to avoid collecting cumulative attributes over all
sub-objects. To implement this optimization, _all_ changes of sub-object
attributes must go through cl_object_attr_set().
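
As an illustration of attribute aggregation, the following sketch derives a
top-object size from per-stripe sizes for a RAID-0 layout. The functions and
the fixed layout are hypothetical, not the actual lov_attr_get() logic:

```c
#include <assert.h>

struct cl_attr {
        long long cat_size;
};

/* RAID-0 over `nr' stripes of `stripe_size' bytes each: the file size
 * implied by stripe `i' holding `sub_size' bytes of data. */
static long long file_size_from_stripe(int i, int nr, long long stripe_size,
                                       long long sub_size)
{
        long long full, tail;

        if (sub_size == 0)
                return 0;
        full = sub_size / stripe_size;          /* full stripe units held */
        tail = sub_size % stripe_size;
        if (tail == 0) {                        /* last unit is exactly full */
                full--;
                tail = stripe_size;
        }
        /* The last byte of stripe `i' lives in stripe row `full'. */
        return (full * nr + i) * stripe_size + tail;
}

/* Top attribute = maximum over what each stripe's size implies. */
static void top_attr_get(struct cl_attr *top, const long long *sub_sizes,
                         int nr, long long stripe_size)
{
        int i;

        top->cat_size = 0;
        for (i = 0; i < nr; i++) {
                long long sz = file_size_from_stripe(i, nr, stripe_size,
                                                     sub_sizes[i]);
                if (sz > top->cat_size)
                        top->cat_size = sz;
        }
}
```

For example, with two stripes of unit 10, sub-sizes of 10 and 4 bytes imply
file sizes of 10 and 14 respectively, so the aggregated top size is 14.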

A cl_page represents a portion of a file, cached in memory. All pages of the
given file are of the same size, and are kept in the radix tree hanging off
the cl_object_header.

A cl_page is associated with a VM page of the hosting environment (struct page
in the Linux kernel, for example), cfs_page_t. It is assumed that this
association is implemented by one of the cl_page layers (the top layer in the
current design) that:

- intercepts per-VM-page call-backs made by the host environment (e.g., memory
  pressure);

- translates state (page flag bits) and locking between Lustre and the host
  environment.

The association between cl_page and cfs_page_t is immutable and established
when the cl_page is created. It is possible to imagine a setup where different
pages get their backing VM buffers from different sources. For example, in the
case of pNFS export, some pages might be backed by local DMU buffers, while
others (representing data in remote stripes) would be backed by normal VM
pages.

Pages within a given object are linearly ordered. The page index is stored in
the ->cpo_index field. In a typical Lustre setup, a top-object has an array of
sub-objects, and every page in a top-object corresponds to a page in one of
its sub-objects. This second page (a sub-page of the first) is a first-class
cl_page and, in particular, it is inserted into the sub-object's radix tree,
where it is indexed by its offset within the sub-object. The sub-page and
top-page are linked together through the ->cp_child and ->cp_parent fields in
struct cl_page:

             +------> radix tree of pages
             |
        cl_object_header <------------ cl_page <-----------------+
             |                            |                      |
  inode <--- ccc_object <------------- ccc_page ---> cfs_page_t  |
             |                            |                      |
        lov_object <---------------- lov_page                    | ->cp_parent
                      ->cpl_obj           |                      |
                                          | ->cp_child           |
             +------> radix tree of pages |                      |
             |                            v                      |
        cl_object_header <------------ cl_page <-----------------+
             |                            |
        lovsub_object <------------ lovsub_page
             |                            |
        osc_object <----------------- osc_page

A cl_page can be "owned" by a particular cl_io (see below), guaranteeing this
IO exclusive access to the page with regard to other IO attempts and various
events changing the page state (such as transfer completion, or eviction of
the page from memory). Note that, in general, a cl_io cannot be identified
with a particular thread, so page ownership is not exactly equal to the
current thread holding a lock on the page. The layer implementing the
association between cl_page and cfs_page_t has to implement ownership on top
of available synchronization mechanisms.

While the Lustre client maintains the notion of page ownership by IO, the
hosting MM/VM usually has its own page concurrency control mechanisms. For
example, in Linux, page access is synchronized by the per-page PG_locked
bit-lock, and generic kernel code (generic_file_*()) takes care to acquire
and release such locks as necessary around calls to the file system methods
(->readpage(), ->prepare_write(), ->commit_write(), etc.). This leads to the
situation where there are two different ways to own a page in the client:

- The client code explicitly and voluntarily owns the page (cl_page_own());

- The hosting VM locks a page and then calls the client, which has to "assume"
  ownership from the VM (cl_page_assume()).

The dual methods to release ownership are cl_page_disown() and
cl_page_unassume().
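
The two ownership paths can be sketched as a tiny state machine; the enum and
helpers below are invented stand-ins for the cl_page_{own,assume,disown,
unassume}() semantics described above:

```c
#include <assert.h>

enum owner { PO_NONE, PO_IO, PO_VM };

struct page_state {
        enum owner owner;
};

/* Voluntary path: the client takes an unowned page. */
static int page_own(struct page_state *p)
{
        if (p->owner != PO_NONE)
                return -1;              /* busy: caller would block/retry */
        p->owner = PO_IO;
        return 0;
}

/* Involuntary path: the VM already holds the page lock and calls into
 * the client, which takes over the existing lock. */
static int page_assume(struct page_state *p)
{
        if (p->owner != PO_VM)
                return -1;
        p->owner = PO_IO;
        return 0;
}

/* The matching releases: disown frees the page entirely, unassume hands
 * the lock back to the VM. */
static void page_disown(struct page_state *p)   { p->owner = PO_NONE; }
static void page_unassume(struct page_state *p) { p->owner = PO_VM; }
```

The asymmetry of the release paths mirrors the acquisition paths: a page owned
via assume must be returned to the VM, not simply unlocked.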

4.3. Page Transfer Locking
==========================

cl_page implements a simple locking design: as noted above, a page is
protected by a VM lock while IO owns it. The same lock is kept while the page
is in transfer. Note that this is different from the standard Linux kernel
behavior, where page write-out is protected by a lock (PG_writeback) separate
from the VM lock (PG_locked). It is felt that this single-lock design is more
portable and, moreover, Lustre cannot benefit much from a separate write-out
lock due to LDLM

See the documentation for cl_object.h:cl_page_operations. See the cl_page
state descriptions in the documentation for cl_object.h:cl_page_state.

A struct cl_lock represents an extent lock on cached file or stripe data.
cl_lock is used only to maintain distributed cache coherency and provides no
intra-node synchronization. It should be noted that, as with other Lustre DLM
locks, cl_lock is actually a lock _request_, rather than a lock itself.

As locks protect cached data, and the unit of data caching is a page, locks
are of page granularity.

Locks for a given file are cached in a per-file doubly linked list. The
overall lock life cycle is as follows:

- The lock is created in the CLS_NEW state. At this moment the lock doesn't
  actually protect anything;

- The lock is enqueued, that is, sent to the server, passing through the
  CLS_QUEUING state. In this state multiple network communications with
  multiple servers may occur;

- Once fully enqueued, the lock moves into the CLS_ENQUEUED state, where it
  waits for a final reply from the server or servers;

- When a reply granting the lock is received, the lock moves into the CLS_HELD
  state. In this state the lock protects file data, and pages in the lock
  extent can be cached (and dirtied for a write lock);

- When the lock is not actively used, it is `unused' and, moving through the
  CLS_UNLOCKING state, lands in the CLS_CACHED state. In this state the lock
  still protects cached data. The difference from the CLS_HELD state is that a
  lock in the CLS_CACHED state can be cancelled;

- Ultimately, the lock is either cancelled, or destroyed without cancellation.
  In either case, it is moved into the CLS_FREEING state and eventually freed.

  A lock can be cancelled by a client either voluntarily (in reaction to
  memory pressure, by explicit user request, or as part of early cancellation),
  or involuntarily, when a blocking AST arrives.

  A lock can be destroyed without cancellation when its object is destroyed
  (there should be no cached data at this point), or during eviction (when
  cached data are invalid);

- If an unrecoverable error occurs at any point (e.g., due to a network
  timeout, or a server's refusal to grant a lock), the lock is moved into the
  CLS_FREEING state.

The description above matches the slow IO path. In the common fast path there
is already a cached lock covering the extent which the IO is against. In this
case, the cl_lock_find() function finds the cached lock. If the found lock is
in the CLS_HELD state, it can be used for IO immediately. If the found lock is
in the CLS_CACHED state, it is removed from the cache and transitions to
CLS_HELD. If the lock is in the CLS_QUEUING or CLS_ENQUEUED state, some other
IO is currently in the process of enqueuing it, and the current thread helps
that other thread by continuing the enqueue operation.

The actual process of finding a lock in the cache is in fact more involved
than the above description, because there are cases when a lock matching the
IO extent and mode still cannot be used for this IO. For example, locks
covering multiple stripes cannot be used for regular IO, due to the danger of
cascading evictions. For such situations, every layer can optionally define a
cl_lock_operations::clo_fits_into() method that might declare a given lock
unsuitable for a given IO. See lov_lock_fits_into() as an example.
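
The lookup-with-veto pattern can be sketched as follows; the list-based
"cache", the descriptor fields, and both functions are invented to illustrate
the clo_fits_into() idea:

```c
#include <assert.h>
#include <stddef.h>

struct lock_desc {
        long start, end;        /* extent covered by the lock */
        int  spans_stripes;     /* covers more than one stripe */
};

struct cached_lock {
        struct lock_desc    desc;
        struct cached_lock *next;
};

/* Per-layer veto in the spirit of ->clo_fits_into(): a lock covering
 * multiple stripes does not fit regular IO (cascading eviction danger). */
static int fits_into(const struct cached_lock *lock, int regular_io)
{
        return !(regular_io && lock->desc.spans_stripes);
}

/* Fast path: return the first cached lock that both covers the extent
 * and passes the veto; NULL means a new lock must be created/enqueued. */
static struct cached_lock *lock_find(struct cached_lock *head,
                                     long start, long end, int regular_io)
{
        struct cached_lock *l;

        for (l = head; l != NULL; l = l->next)
                if (l->desc.start <= start && end <= l->desc.end &&
                    fits_into(l, regular_io))
                        return l;
        return NULL;
}
```

Note how the same cached lock can be acceptable for one kind of IO and
rejected for another; the extent match alone is not sufficient.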

5.2. Top-lock and Sub-locks
===========================

A top-lock protects the cached pages of a top-object, and is based on a set of
sub-locks, protecting the cached pages of sub-objects:

            +---------> list of locks
            |
       cl_object_header <------------ cl_lock
            |                            |
       ccc_object <------------------ ccc_lock
            |                            |
       lov_object <------------------ lov_lock
                                         |
                         +---+---+---+---+---+ (array of sub-lock slots)
                                         |
            +----------> list of locks   |
            |                            v
       cl_object_header <------------ cl_lock
            |                            |
       lovsub_object <------------ lovsub_lock
            |                            |
       osc_object <---------------- osc_lock

When a top-lock is created, it creates sub-locks based on the striping method
(RAID0 currently). Sub-locks are `created' in the same manner as top-locks: by
calling the cl_lock_find() function to go through the lock cache. To enqueue a
top-lock, all of its sub-locks have to be enqueued as well, with ordering
constraints defined by the enqueue options:

- To enqueue a regular top-lock, each sub-lock has to be enqueued and granted
  before the next one can be enqueued. This is necessary to avoid deadlock;

- For `try-lock' style top-locks (e.g., a glimpse request, or O_NONBLOCK IO
  locks), requests can be enqueued in parallel, because deadlock is not
  possible in this case.

Sub-lock state depends on its top-lock state:

- When a top-lock is being enqueued, its sub-locks are in the QUEUING,
  ENQUEUED or HELD states;

- When a top-lock is in the HELD state, its sub-locks are in the HELD state
  too;

- When a top-lock is in the CACHED state, its sub-locks are in the CACHED
  state too;

- When a top-lock is in the FREEING state, it detaches itself from all
  sub-locks, and those are usually deleted too.

A sub-lock can be cancelled while its top-lock is in the CACHED state. To
maintain the invariant that a CACHED lock is immediately ready for re-use by
IO, the top-lock is moved into the NEW state. The next attempt to use this
lock will enqueue it again, resulting in the creation and enqueue of any
missing sub-locks. As follows from the description above, the top-lock
provides somewhat weaker guarantees than one might expect:

- Some of its sub-locks can be missing, and

- The top-lock does not necessarily protect the whole of its extent.

In other words, a top-lock is potentially porous, and in effect it is just a
hint describing which sub-locks are likely to exist. Nonetheless, in the most
important cases of a single file per client, and of clients working in
disjoint areas of a shared file, this hint is precise.

5.3. Lock State Machine
=======================

A cl_lock is a state machine. This requires some clarification. One of the
goals of CLIO is to make the IO path non-blocking, or at least to make it
easier to make it non-blocking in the future. Here `non-blocking' means that
when a system call (read, write, truncate) reaches a situation where it has to
wait for a communication with the server, it should--instead of
waiting--remember its current state and switch to some other work. That is,
instead of waiting for a lock enqueue, the client should proceed doing IO on
the next stripe, etc. Obviously this is a rather radical redesign, and it is
not planned to be fully implemented at this time. Instead we are putting some
infrastructure in place that would make it easier to do asynchronous
non-blocking IO in the future. Specifically, where the old locking code goes
to sleep (waiting for an enqueue, for example), the new code returns
cl_lock_transition::CLO_WAIT. When the enqueue reply comes, its completion
handler signals that the lock state machine is ready to move to the next
state. There is some generic code in cl_lock.c that sleeps, waiting for these
signals. As a result, for users of this cl_lock.c code, it looks like locking
is done in the normal blocking fashion, and at the same time it is possible to
switch to the non-blocking locking (simply by returning
cl_lock_transition::CLO_WAIT from the cl_lock.c functions).

For a description of the state machine states and transitions see enum
cl_lock_state.

There are two ways to restrict the set of states which a lock might move to:

- Placing a "hold" on a lock guarantees that the lock will not be moved into
  the cl_lock_state::CLS_FREEING state until the hold is released. A hold can
  only be acquired on a lock that is not in cl_lock_state::CLS_FREEING. All
  holds on a lock are counted in cl_lock::cll_holds. A hold protects the lock
  from cancellation and destruction. Requests to cancel and destroy a lock on
  hold will be recorded, but only honoured when the last hold on the lock is
  released;

- Placing a "user" on a lock guarantees that the lock will not leave the set
  of states cl_lock_state::CLS_NEW, cl_lock_state::CLS_QUEUING,
  cl_lock_state::CLS_ENQUEUED and cl_lock_state::CLS_HELD, once it enters this
  set. That is, if a user is added onto a lock in a state not from this set,
  it doesn't immediately force the lock to move to this set, but once the lock
  enters this set it will remain there until all users are removed. Lock users
  are counted in cl_lock::cll_users.

  A user is used to assure that the lock is not cancelled or destroyed while
  it is being enqueued or actively used by some IO.

  Currently, a user always comes with a hold (cl_lock_invariant() checks that
  the number of holds is not less than the number of users).

Lock "users" are used by the top-level IO code to guarantee that a lock is not
cancelled while the IO it protects is going on. Lock "holds" are used by a
top-lock (the LOV code) to guarantee that its sub-locks are in an expected
state.
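
The hold/user accounting can be sketched as a pair of counters with the
invariant noted above; the field names mimic cl_lock but the helpers are
invented:

```c
#include <assert.h>

struct lock_counters {
        int cll_holds;          /* protect from cancellation/destruction */
        int cll_users;          /* lock actively used or being enqueued */
};

static void lock_hold(struct lock_counters *l)   { l->cll_holds++; }
static void lock_unhold(struct lock_counters *l) { l->cll_holds--; }

/* A "user" always takes a hold too, which is what keeps the invariant
 * below true at all times. */
static void lock_user_add(struct lock_counters *l)
{
        lock_hold(l);
        l->cll_users++;
}

static void lock_user_del(struct lock_counters *l)
{
        l->cll_users--;
        lock_unhold(l);
}

/* The counting portion of what cl_lock_invariant() checks. */
static int lock_invariant(const struct lock_counters *l)
{
        return l->cll_holds >= l->cll_users && l->cll_users >= 0;
}
```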
856 5.4. Lock Concurrency
857 =====================
859 The following describes how the lock state-machine operates. The fields of
860 struct cl_lock are protected by the cl_lock::cll_guard mutex.
862 - The mutex is taken, and cl_lock::cll_state is examined.
864 - For every state there are possible target states which the lock can move
865 into. They are tried in order. Attempts to move into the next state are
866 done by _try() functions in cl_lock.c:cl_{enqueue,unlock,wait}_try().
- If the transition can be performed immediately, the state is changed and the
  mutex is released.
871 - If the transition requires blocking, the _try() function returns
872 cl_lock_transition::CLO_WAIT. The caller unlocks the mutex and goes to sleep,
873 waiting for the possibility of a lock state change. It is woken up when some
874 event occurs that makes lock state change possible (e.g., the reception of
875 the reply from the server), and repeats the loop.
A top-lock and its sub-locks have separate mutexes, and the latter have to be
taken first to avoid deadlock.
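The driving idiom (examine the state under the mutex, call a _try() function,
sleep on CLO_WAIT and retry) can be sketched as follows. All names here are
hypothetical; in the kernel the waiting happens under cl_lock::cll_guard with
a real wait queue and the event is, e.g., a server reply.

```c
#include <assert.h>

enum { CLO_OK = 0, CLO_WAIT = 1 };

struct toy_lock2 {
	int state;          /* examined under the "mutex" */
	int reply_arrived;  /* e.g., a server reply was received */
};

/* Try to advance toward the target state without blocking. */
static int toy_enqueue_try(struct toy_lock2 *lk, int target)
{
	if (!lk->reply_arrived)
		return CLO_WAIT;  /* cannot advance yet */
	lk->state = target;
	return CLO_OK;
}

/* Caller loop: in the kernel this unlocks the mutex and sleeps; here
 * the "event" is simulated by a callback that makes progress possible. */
static int toy_enqueue(struct toy_lock2 *lk, int target,
		       void (*wait_event)(struct toy_lock2 *))
{
	int rc;

	while ((rc = toy_enqueue_try(lk, target)) == CLO_WAIT)
		wait_event(lk); /* mutex dropped here, re-taken on wakeup */
	return rc;
}

static void toy_reply(struct toy_lock2 *lk)
{
	lk->reply_arrived = 1; /* reply reception unblocks the transition */
}
```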
880 To see an example of interaction of all these issues, take a look at the
881 lov_cl.c:lov_lock_enqueue() function. It is called as part of cl_enqueue_try(),
882 and tries to advance top-lock to the ENQUEUED state by advancing the
883 state-machines of its sub-locks (lov_lock_enqueue_one()). Note also that it
884 uses trylock to take the sub-lock mutex to avoid deadlock. It also has to
handle CEF_ASYNC enqueue, when sub-lock enqueues have to be done in parallel
886 (this is used for glimpse locks which cannot deadlock).
         +------------------>NEW
         |                    |
         |                    | cl_enqueue_try()
         |                    V
         |    +--------------QUEUING (*)
         |    |               |
         |    |               | cl_enqueue_try()
         |    |               V
sub-lock |    +-------------ENQUEUED (*)
canceled |    |               |
         |    |               | cl_wait_try()
         |    |               V
         |    |              HELD<---------+
         |    |               |            |
         |    | cl_unuse_try()|            |
         |    |               V            | cached
         |    +------------>UNLOCKING (*)  | lock found
         |                    |            | cl_use_try()
         |     cl_unuse_try() |            |
         |                    V            |
         +------------------CACHED---------+
                              |
                             (C)
                              |
                              V
                           FREEING

Where (*) marks states in which the state machine can block, waiting for an
event (e.g., a server reply), and (C) is the transition to FREEING via lock
cancellation.
922 5.5. Shared Sub-locks
923 =====================
925 For various reasons, the same sub-lock can be shared by multiple top-locks. For
926 example, a large sub-lock can match multiple small top-locks. In general, a
927 sub-lock keeps a list of all its parents, and propagates certain events to
928 them, e.g., as described above, when a sub-lock is cancelled, it moves _all_ of
929 its top-locks from CACHED to NEW state.
This leads to a curious situation where an operation on some top-lock (e.g.,
enqueue) changes the state of one of its sub-locks, and this change has to be
933 propagated to the other top-locks of this sub-lock. The resulting locking
934 pattern is top->bottom->top, which is obviously not deadlock safe. To avoid
935 deadlocks, try-locking is used in such situations. See
936 cl_object.h:cl_lock_closure documentation for details.
938 5.6. Use Case: Lock Invalidation
939 ================================
941 To demonstrate how objects, pages and lock data-structures interact, let's look
942 at the example of stripe lock invalidation.
944 Imagine that on the client C0 there is a file object F, striped over stripes
945 S0, S1 and S2 (files and stripes are represented by cl_object_header). Further,
946 there is a write lock LF, for the extent [a, b] (recall that lock extents are
947 measured in pages) of file F. This lock is based on write sub-locks LS0, LS1
948 and LS2 for the corresponding extents of S0, S1 and S2 respectively.
All locks are in the CACHED state. Each LSi sub-lock has an osc_lock slice, where a
951 pointer to the struct ldlm_lock is stored. The ->l_ast_data field of ldlm_lock
952 points back to the sub-lock's osc_lock.
954 The client caches clean and dirty pages for F, some in [a, b] and some outside
955 of it (these latter are necessarily covered by some other locks). Each of these
956 pages is in F's radix tree, and points through cl_page::cp_child to a sub-page
which is in the radix tree of one of the Si objects.
959 Some other client requests a lock that conflicts with LS1. The OST where S1
lives sends a blocking AST to C0.
962 C0's LDLM invokes lock->l_blocking_ast(), which is osc_ldlm_blocking_ast(),
which eventually acquires a mutex on the sub-lock and calls
964 cl_lock_cancel(sub-lock). cl_lock_cancel() ascends through sub-lock's slices
(which are osc_lock and lovsub_lock), calling the ->clo_cancel() method at every
layer, that is, calling osc_lock_cancel() (the LOVSUB layer doesn't define this
method).
969 osc_lock_cancel() calls cl_lock_page_out() to invalidate all pages cached under
970 this lock after sending dirty ones back to stripe S1's server.
972 To do this, cl_lock_page_out() obtains the sub-lock's object and sweeps through
973 its radix tree from the starting to ending offset of the sub-lock (recall that
974 a sub-lock extent is measured in page offsets within a sub-object). For every
page thus found, the cl_page_unmap() function is called to invalidate it. This
976 function goes through sub-page slices bottom-to-top, then follows ->cp_parent
977 pointer to go to the top-page and repeats the same process. Eventually
vvp_page_unmap() is called, which unmaps the page (a top-page by this time) from
the user address space.
981 After a page is invalidated, it is prepared for transfer if it is dirty. This
982 step also includes a bottom-to-top scan of the page and top-page slices, and
983 calls to ->cpo_prep() methods at each layer, allowing vvp_page_prep_write() to
984 announce to the VM that the VM page is being written.
986 Once all pages are written, they are removed from radix-trees and destroyed.
This completes invalidation of a sub-lock, and osc_lock_cancel() exits. Note
that:
990 - No special cancellation logic for the top-lock is necessary;
992 - Specifically, VVP knows nothing about striping and there is no need to
993 handle the case where only part of the top-lock is cancelled;
- There is no need to convert between file and stripe offsets during this
  process;
998 - There is no need to keep track of locks protecting the given page.
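The sweep performed by cl_lock_page_out() can be approximated by a toy model in
which an array stands in for the object's radix tree; all names are invented:

```c
#include <assert.h>

enum toy_page_state { TP_CLEAN, TP_DIRTY, TP_GONE };

/* Walk the sub-object's pages in the sub-lock's [start, end] extent
 * (offsets within the sub-object): dirty pages are written back first
 * (->cpo_prep() + transfer), then every page is unmapped and removed.
 * Returns the number of pages that had to be written. */
static int toy_page_out(enum toy_page_state *pages, int start, int end)
{
	int i, written = 0;

	for (i = start; i <= end; i++) {
		if (pages[i] == TP_DIRTY)
			written++;       /* write-back of a dirty page */
		pages[i] = TP_GONE;      /* unmap + drop from the tree */
	}
	return written;
}
```

Pages outside the extent are untouched, matching the rule that they are
necessarily covered by other locks.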
1004 An IO context (struct cl_io) is a layered object describing the state of an
1005 ongoing IO operation (such as a system call).
1010 There are two classes of IO contexts, represented by cl_io:
1012 - An IO for a specific type of client activity, enumerated by enum cl_io_type:
. CIT_READ: read system calls, including read(2), readv(2), pread(2), etc.;
1016 . CIT_WRITE: write system call;
1017 . CIT_TRUNC: truncate system call;
1018 . CIT_FAULT: page fault handling;
1020 - A `catch-all' CIT_MISC IO type for all other IO activity:
1022 . cancellation of an extent lock,
1023 . VM induced page write-out,
1025 . other miscellaneous stuff.
1027 The difference between CIT_MISC and other IO types is that CIT_MISC IO is
1028 merely a context in which pages are owned and locks are enqueued, whereas
1029 other IO types, in addition to being a context, are also state machines.
1031 6.2. IO State Machine
1032 =====================
1034 The idea behind the cl_io state machine is that initial `work' that has to be
1035 done (e.g., writing a 3MB user buffer into a given file) is done as a sequence
of `iterations', and an iteration is executed following an idiomatic sequence
of steps:
1039 - Prepare: determine what work is to be done at this iteration;
1041 - Lock: enqueue and acquire all locks necessary to perform this iteration;
1043 - Start: either perform iteration work synchronously, or post it
1044 asynchronously, or both;
1046 - End: wait for the completion of asynchronous work;
1048 - Unlock: release locks, acquired at the "lock" step;
1050 - Finalize: finalize iteration state.
1052 cl_io is a layered entity and each step above is performed by invoking the
1053 corresponding cl_io_operations method on every layer. As will be explained
1054 below, this is especially important in the `prepare' step, as it allows layers
1055 to cooperate in determining the scope of the current iteration.
1057 For CIT_READ or CIT_WRITE IO, a typical scenario is splitting the original user
1058 buffer into chunks that map completely inside of a single stripe in the target
1059 file, and processing each chunk as a separate iteration. In this case, it is
1060 the LOV layer that (in lov_io_rw_iter_init() function) determines the extent of
1061 the current iteration.
1063 Once the iteration is prepared, the `lock' step acquires all necessary DLM
1064 locks to cover the region of a file that is affected by the current iteration.
1065 The `start' step does the actual processing, which for write means placing
1066 pages from the user buffer into the cache, and for read means fetching pages
1067 from the server, including read-ahead pages (see `immediate transfer' below).
Truncate and page fault are executed in one iteration (currently, that is; it
would be easy to change the truncate implementation to, for instance, truncate
each stripe in a separate iteration, should the need arise).
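A condensed model of this iteration cycle, with the stripe-clipping `prepare'
step, might look as follows (toy names only; the real loop is cl_io_loop() and
the clipping lives in lov_io_rw_iter_init()):

```c
#include <assert.h>

struct toy_io {
	long ti_pos;      /* file position of this IO */
	long ti_count;    /* bytes remaining */
	long ti_chunk;    /* what iter_init chose for this iteration */
	int  ti_continue; /* models cl_io::ci_continue */
};

/* "Prepare": clip the remaining extent so it fits in one stripe. */
static void toy_iter_init(struct toy_io *io, long stripe_size)
{
	long in_stripe = stripe_size - io->ti_pos % stripe_size;

	io->ti_chunk = io->ti_count < in_stripe ? io->ti_count : in_stripe;
}

/* "Advance": account for the work done by this iteration. */
static void toy_advance(struct toy_io *io)
{
	io->ti_pos += io->ti_chunk;
	io->ti_count -= io->ti_chunk;
	io->ti_continue = io->ti_count > 0;
}

/* The loop: prepare, (lock, start, end, unlock, finalize), advance,
 * repeated while ci_continue remains set. Returns iteration count. */
static int toy_io_loop(struct toy_io *io, long stripe_size)
{
	int iterations = 0;

	do {
		toy_iter_init(io, stripe_size);
		/* lock/start/end/unlock/finalize steps elided */
		toy_advance(io);
		iterations++;
	} while (io->ti_continue);
	return iterations;
}
```

A 5000-byte write starting at offset 1000 with a 4096-byte stripe takes two
iterations: 3096 bytes to the end of the first stripe, then the remaining 1904.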
6.3. Parallel IO
================

One important planned generalization of this model is an out-of-order execution
of iterations.
1078 A motivating example for this is a write of a large user level buffer,
1079 overlapping with multiple stripes. Typically, a busy Lustre client has its
1080 per-OSC caches for the dirty pages nearly full, which means that the write
1081 often has to block, waiting for the cache to drain. Instead of blocking the
1082 whole IO operation, CIT_WRITE might switch to the next stripe and try to do IO
1083 there. Without such a `non-blocking' IO, a slow OST or an unfair network
1084 degrades the performance of the whole cluster.
1086 Another example is a legacy single-threaded application running on a multi-core
1087 client machine, where IO throughput is limited by the single thread copying
data between the user buffer and the kernel pages. Multiple concurrent IO
1089 iterations that can be scheduled independently on the available processors
1090 eliminate this bottleneck by copying the data in parallel.
1092 Obviously, parallel IO is not compatible with the usual `sequential IO'
1093 semantics. For example, POSIX read and write have a very simple failure model,
1094 where some initial (possibly empty) segment of a user buffer is processed
successfully, and none of the remaining bytes are read or written. Parallel
1096 IO can fail in much more complex ways.
1098 For now, only sequential iterations are supported.
1100 6.4. Data-flow: From Stack to IO Slice
1101 ======================================
1103 The parallel IO design outlined above implies that an ongoing IO can be
1104 preempted by other IO and later resumed, all potentially in the same thread.
1105 This means that IO state cannot be kept on a stack, as it is customarily done
1106 in UNIX file system drivers. Instead, the layered cl_io is used to store
information about the current iteration and progress within it. Coincidentally
(almost), this is similar to the way IO request packets (IRPs) are used by
Windows driver stacks.
1111 A set of common fields in struct cl_io describe the IO and are shared by all
1112 layers. Important properties so described include:
1116 - A file (struct cl_object) against which this IO is executed;
1118 - A position in a file where the read or write is taking place, and a count of
1119 bytes remaining to be processed (for CIT_READ and CIT_WRITE);
1121 - A size to which file is being truncated or expanded (for CIT_TRUNC);
1123 - A list of locks acquired for this IO;
1125 Each layer keeps IO state in its `IO slice', described below, with all slices
1126 chained to the list hanging off of struct cl_io:
1128 - vvp_io, ccc_io: these two slices are used by the top-most layer of the Linux
1129 kernel client. ccc_io is a state common between kernel client and liblustre,
1130 and vvp_io is a state private to the kernel client.
1132 The most important state in ccc_io is an array of struct iovec, describing
1133 user space buffers from or to which IO is taking place. Note that other
1134 layers in the IO stack have no idea that data actually came from user space.
1136 vvp_io contains kernel specific fields, such as VM information describing a
1137 page fault, or the sendfile target.
1139 - lov_io: IO state private for the LOV layer is kept here. The most important IO
1140 state at the LOV layer is an array of sub-IO's. Each sub-IO is a normal
1141 struct cl_io, representing a part of the IO process for a given iteration.
1142 With current sequential iterations, only one sub-IO is active at a time.
1144 - osc_io: this slice stores IO state private to the OSC layer that exists within
1145 each sub-IO created by LOV.
1151 7.1. Immediate vs. Opportunistic Transfers
1152 ==========================================
1154 There are two possible modes of transfer initiation on the client:
1156 - Immediate transfer: this is started when a high level IO wants a page or a
1157 collection of pages to be transferred right away. Examples: read-ahead,
1158 a synchronous read in the case of non-page aligned write, page write-out as
1159 part of an extent lock cancellation, page write-out as a part of memory
1160 cleansing. Immediate transfer can be both cl_req_type::CRT_READ and
1161 cl_req_type::CRT_WRITE;
1163 - Opportunistic transfer (cl_req_type::CRT_WRITE only), that happens when IO
1164 wants to transfer a page to the server some time later, when it can be done
1165 efficiently. Example: pages dirtied by the write(2) path. Pages submitted for
1166 an opportunistic transfer are kept in a "staging area".
1168 In any case, a transfer takes place in the form of a cl_req, which is a
representation of a network RPC.
1171 Pages queued for an opportunistic transfer are placed into a staging area
1172 (represented as a set of per-object and per-device queues at the OSC layer)
1173 until it is decided that an efficient RPC can be composed of them. This
1174 decision is made by "a req-formation engine", currently implemented as part of
1175 the OSC layer. Req-formation depends on many factors: the size of the resulting
1176 RPC, RPC alignment, whether or not multi-object RPCs are supported by the
1177 server, max-RPC-in-flight limitations, size of the staging area, etc. CLIO uses
1178 unmodified RPC formation logic from OSC, so it is not discussed here.
1180 For an immediate transfer the IO submits a cl_page_list which the req-formation
engine slices into cl_req's, possibly adding cached pages to some of them.
1184 Whenever a page from cl_page_list is added to a newly constructed req, its
1185 cl_page_operations::cpo_prep() layer methods are called. At that moment, the
1186 page state is atomically changed from cl_page_state::CPS_OWNED to
1187 cl_page_state::CPS_PAGEOUT or cl_page_state::CPS_PAGEIN, cl_page::cp_owner is
1188 zeroed, and cl_page::cp_req is set to the req. cl_page_operations::cpo_prep()
1189 method at a particular layer might return -EALREADY to indicate that it does
1190 not need to submit this page at all. This is possible, for example, if a page
1191 submitted for read became up-to-date in the meantime; and for write, if the
page does not have the dirty bit set. See cl_io_submit_rw() for details.
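The -EALREADY convention can be sketched like this (illustrative names only;
the real method is cl_page_operations::cpo_prep() and the constant is the
kernel's EALREADY):

```c
#include <assert.h>

#define TOY_EALREADY 114 /* stand-in for the kernel's EALREADY */

struct toy_page3 {
	int tp_uptodate;
	int tp_in_req;
};

/* Per-page prep for read: an already up-to-date page needs no transfer. */
static int toy_cpo_prep_read(struct toy_page3 *pg)
{
	return pg->tp_uptodate ? -TOY_EALREADY : 0;
}

/* Returns the number of pages actually placed into the req. */
static int toy_submit_read(struct toy_page3 *pages, int n)
{
	int i, added = 0;

	for (i = 0; i < n; i++) {
		if (toy_cpo_prep_read(&pages[i]) == -TOY_EALREADY)
			continue; /* became up-to-date in the meantime */
		pages[i].tp_in_req = 1;
		added++;
	}
	return added;
}
```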
1194 Whenever a staged page is added to a newly constructed req, its
1195 cl_page_operations::cpo_make_ready() layer methods are called. At that moment,
1196 the page state is atomically changed from cl_page_state::CPS_CACHED to
1197 cl_page_state::CPS_PAGEOUT, and cl_page::cp_req is set to req. The
1198 cl_page_operations::cpo_make_ready() method at a particular layer might return
1199 -EAGAIN to indicate that this page is not currently eligible for the transfer.
1201 The RPC engine guarantees that once the ->cpo_prep() or ->cpo_make_ready()
1202 method has been called, the page completion routine (->cpo_completion() layer
1203 method) will eventually be called (either as a result of successful page
1204 transfer completion, or due to timeout).
1206 To summarize, there are two main entry points into transfer sub-system:
1208 - cl_io_submit_rw(): submits a list of pages for immediate transfer;
1210 - cl_page_cache_add(): places a page into staging area for future
1211 opportunistic transfer.
To submit a group of pages for immediate transfer, struct cl_2queue is used. It
1217 contains two page lists: qin (input queue) and qout (output queue). Pages are
1218 linked into these queues by cl_page::cp_batch list heads. Qin is populated with
1219 the pages to be submitted to the transfer, and pages that were actually
submitted are placed onto qout. Not all pages from qin might end up on qout due
to:
1223 - ->cpo_prep() methods deciding that page should not be transferred, or
1225 - unrecoverable submission error.
1227 Pages not moved to qout remain on qin. It is up to the transfer submitter to
1228 decide when to remove pages from qin and qout. Remaining pages on qin are
1229 usually removed from this list right after (partially unsuccessful) transfer
1230 submission. Pages are usually left on qout until transfer completion. This way
1231 the caller can determine when all pages from the list were transferred.
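A toy version of the two-queue protocol, with fixed-size arrays standing in for
the cp_batch-linked lists (all names invented):

```c
#include <assert.h>

#define TOYQ_MAX 16

struct toy_2queue {
	int q_in[TOYQ_MAX];  int q_in_nr;   /* models qin  */
	int q_out[TOYQ_MAX]; int q_out_nr;  /* models qout */
};

/* Submit every page on qin: pages accepted by the per-page predicate
 * (standing in for ->cpo_prep() and error-free submission) move to
 * qout; the rest remain on qin. */
static void toy_2queue_submit(struct toy_2queue *q,
			      int (*can_submit)(int page))
{
	int i, kept = 0;

	for (i = 0; i < q->q_in_nr; i++) {
		if (can_submit(q->q_in[i]))
			q->q_out[q->q_out_nr++] = q->q_in[i];
		else
			q->q_in[kept++] = q->q_in[i]; /* stays on qin */
	}
	q->q_in_nr = kept;
}

/* Example predicate: only odd page indices are submittable. */
static int toy_odd_only(int page) { return page % 2; }
```

After submission the caller can see exactly which pages went out (qout) and
which were declined (still on qin), matching the protocol above.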
1233 The association between a page and an immediate transfer queue is protected by
cl_page::cp_mutex. This mutex is acquired when a cl_page is added to a
1235 cl_page_list and released when a page is removed from the list.
1237 When an RPC is formed, all of its constituent pages are linked together through
1238 cl_page::cp_flight list hanging off of cl_req::crq_pages. Pages are removed
1239 from this list just before the transfer completion method is invoked. No
1240 special lock protects this list, as pages in transfer are under a VM lock.
1242 7.3. Transfer States: Prepare, Completion
1243 =========================================
1245 The transfer (cl_req) state machine is trivial, and it is not explicitly coded.
1246 A newly created transfer is in the "prepare" state while pages are collected.
1247 When all pages are gathered, the transfer enters the "in-flight" state where it
remains until it reaches the "completion" state, where page completion handlers
are invoked.
1251 The per-layer ->cro_prep() transfer method is called when transfer preparation
1252 is completed and transfer is about to enter the in-flight state. Similarly, the
1253 per-layer ->cro_completion() method is called when the transfer completes
1254 before per-page completion methods are called.
1256 Additionally, before moving a transfer out of the prepare state, the RPC engine
1257 calls the cl_req_attr_set() function. This function invokes ->cro_attr_set()
methods on every layer to fill in the RPC header that the server uses to
determine where to get or put data. This replaces the old
->ap_{update,fill}_obdo() methods.
1262 Further, cl_req's are not reference counted and access to them is not
1263 synchronized. This is because they are accessed only by the RPC engine in OSC
1264 which fully controls RPC life-time, and it uses an internal OSC lock
1265 (client_obd::cl_loi_list_lock spin-lock) for serialization.
1267 7.4. Page Completion Handlers, Synchronous Transfer
1268 ===================================================
1270 When a transfer completes, cl_req completion methods are called on every layer.
1271 Then, for every transfer page, per-layer page completion methods
1272 ->cpo_completion() are invoked. The page is still under the VM lock at this
moment. Completion methods are called bottom-to-top and it is the
responsibility of the last of them (i.e., the completion method of the top-most
layer---VVP)
1275 to release the VM lock.
1277 Both immediate and opportunistic transfers are asynchronous in the sense that
1278 control can return to the caller before the transfer completes. CLIO doesn't
1279 provide a synchronous transfer interface at all and it is up to a particular
1280 caller to implement it if necessary. The simplest way to wait for the transfer
completion is to wait on the VM page lock. This approach is used implicitly by the
1282 Linux kernel. There is a case, though, where one wants to do transfer
1283 completely synchronously without releasing the page VM lock: when
1284 ->prepare_write() method determines that a write goes from a non page-aligned
1285 buffer into a not up-to-date page, a portion of a page has to be fetched from
1286 the server. The VM page lock cannot be used to synchronize transfer completion
1287 in this case, because it is used to mark the page as owned by IO. To handle
1288 this, VVP attaches struct cl_sync_io to struct vvp_page. cl_sync_io contains a
1289 number of pages still in IO and a synchronization primitive (struct completion)
1290 which is signalled when transfer of the last page completes. The VVP page
1291 completion handler (vvp_page_completion_common()) checks for attached
1292 cl_sync_io and if it is there, decreases the number of in-flight pages and
signals completion when that number drops to 0. A similar mechanism is used for
direct IO.
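The counting scheme of cl_sync_io reduces to a few lines; this sketch replaces
the struct completion with a plain flag, and the names are invented:

```c
#include <assert.h>

struct toy_sync_io {
	int tsi_nr_pages; /* pages still in flight */
	int tsi_done;     /* "completion" signalled */
};

static void toy_sync_io_init(struct toy_sync_io *s, int nr)
{
	s->tsi_nr_pages = nr;
	s->tsi_done = nr == 0;
}

/* Called from the per-page completion handler (cf.
 * vvp_page_completion_common()): decrement the in-flight count and
 * signal the waiter when it drops to zero. */
static void toy_sync_io_note(struct toy_sync_io *s)
{
	assert(s->tsi_nr_pages > 0);
	if (--s->tsi_nr_pages == 0)
		s->tsi_done = 1; /* wake up the synchronous waiter */
}
```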
1300 8.1. Motivation, Server Environment Usage
1301 =========================================
1303 lu_env and related data-types (struct lu_context and struct lu_context_key)
1304 together implement a memory pre-allocation interface that Lustre uses to
1305 decrease stack consumption without resorting to fully dynamic allocation.
1307 Stack space is severely limited in the Linux kernel. Lustre traditionally
1308 allocated a lot of automatic variables, resulting in spurious stack overflows
1309 that are hard to trigger (they usually need a certain combination of driver
1310 calls and interrupts to happen, making them extremely difficult to reproduce)
1311 and debug (as stack overflow can easily result in corruption of thread-related
1312 data-structures in the kernel memory, confusing the debugger).
1314 The simplest way to handle this is to replace automatic variables with calls
1315 to the generic memory allocator, but
1317 - The generic allocator has scalability problems, and
1319 - Additional code to free allocated memory is needed.
1321 The lu_env interface was originally introduced in the MDS rewrite for Lustre
2.0 and matches the server-side threading model very well. Roughly speaking,
1323 lu_context represents a context in which computation is executed and
1324 lu_context_key is a description of per-context data. In the simplest case
1325 lu_context corresponds to a server thread; then lu_context_key is effectively a
1326 thread-local storage (TLS). For a similar idea see the user-level pthreads
1327 interface pthread_key_create().
1329 More formally, lu_context_key defines a constructor-destructor pair and a tags
1330 bit-mask. When lu_context is initialized (with a given tag bit-mask), a global
1331 array of all registered lu_context_keys is scanned, constructors for all keys
with matching tags are invoked and their return values are stored in the
lu_context.
1335 Once lu_context has been initialized, a value of any key allocated for this
1336 context can be retrieved very efficiently by indexing in the per-context
array. The lu_context_key_get() function is used for this.
When the context is finalized, destructors are called for all keys allocated in
this context.
1342 The typical server usage is to have a lu_context for every server thread,
1343 initialized when the thread is started. To reduce stack consumption by the
1344 code running in this thread, a lu_context_key is registered that allocates in
1345 its constructor a struct containing as fields values otherwise allocated on
1346 the stack. See {mdt,osd,cmm,mdd}_thread_info for examples. Instead of doing
        int function(args) {
                /* structure "bar" in module "foo" */
                struct foo_bar bar;
                ...
        }

the code roughly does

        struct foo_thread_info {
                struct foo_bar fti_bar;
                ...
        };

        int function(const struct lu_env *env, args) {
                struct foo_bar *bar;

                bar = &lu_context_key_get(&env->le_ctx, &foo_thread_key)->fti_bar;
                ...
        }
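A user-space approximation of the key machinery (registration assigns each key
an index; context initialization runs matching constructors; retrieval is
array indexing) could look like this. It is a deliberate simplification: real
keys produce allocated structures rather than ints, and registration happens
at module-load time.

```c
#include <assert.h>

#define TOY_KEYS_MAX 8

struct toy_key {
	unsigned tk_tags;     /* tag bit-mask */
	int (*tk_init)(void); /* constructor */
	int tk_index;         /* slot in the per-context array */
};

static struct toy_key *toy_keys[TOY_KEYS_MAX]; /* global key array */
static int toy_key_nr;

struct toy_context {
	int tc_values[TOY_KEYS_MAX]; /* per-context key values */
};

static void toy_key_register(struct toy_key *key)
{
	key->tk_index = toy_key_nr;
	toy_keys[toy_key_nr++] = key;
}

/* Scan registered keys; run constructors for those with matching tags. */
static void toy_context_init(struct toy_context *ctx, unsigned tags)
{
	int i;

	for (i = 0; i < toy_key_nr; i++)
		if (toy_keys[i]->tk_tags & tags)
			ctx->tc_values[i] = toy_keys[i]->tk_init();
}

/* lu_context_key_get(): plain array indexing, hence very cheap. */
static int toy_context_key_get(struct toy_context *ctx, struct toy_key *key)
{
	return ctx->tc_values[key->tk_index];
}

static int toy_ctor(void) { return 42; }
```

Compare pthread_key_create()/pthread_getspecific() for the analogous
user-level TLS interface mentioned above.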
1367 struct lu_env contains 2 contexts:
1369 - le_ctx: this context is embedded in lu_env. By convention, this context is
1370 used _only_ to avoid allocations on the stack, and it should never be used to
1371 pass parameters between functions or layers. The reason for this restriction
1372 is that using contexts for implicit state sharing leads to a code that is
1373 difficult to understand and modify.
1375 - le_ses: this is a pointer to a context shared by all threads handling given
1376 RPC. Context itself is embedded into struct ptlrpc_request. Currently a
1377 request is always processed by a single thread, but this might change in the
future in a design where a small pool of threads processes RPCs asynchronously.
1381 Additionally, state kept in env->le_ses context is shared by multiple layers.
1382 For example, remote user credentials are stored there.
1384 8.2. Client Environment Usage
1385 =============================
1387 On a client there is a lu_env associated with every thread executing Lustre
1388 code. Again, it contains &env->le_ctx context used to reduce stack consumption.
1389 env->le_ses is used to share state between all threads handling a given IO.
1390 Again, currently an IO is processed by a single thread. env->le_ses is used to
1391 efficiently allocate cl_io slices ({vvp,lov,osc}_io).
1393 There are three important differences with lu_env usage on the server:
1395 - While on the server there is a fixed pool of threads, any client thread can
1396 execute Lustre code. This makes it impractical to pre-allocate and
1397 pre-initialize lu_context for every thread. Instead, contexts are constructed
1398 on demand and after use returned into a global cache that amortizes creation
- Client call-chains frequently cross Lustre-VFS and Lustre-VM boundaries.
1402 This means that just passing lu_env as a first parameter to every Lustre
1403 function and method is not enough. To work around this problem, a pointer to
1404 lu_env is stored in a field in the kernel data-structure associated with the
1405 current thread (task_struct::journal_info), from where it is recovered when
1406 Lustre code is re-entered from VFS or VM;
1408 - Sometimes client code is re-entered in a fashion that precludes re-use of the
higher level lu_env. For example, when a read or write incurs a page fault
1410 in the user space buffer memory-mapped from a Lustre file, page fault
1411 handling is a separate IO, independent of the already ongoing system call.
1412 The Lustre page fault handler allocates a new lu_env (by calling
1413 lu_env_get_nested()) in which the nested IO is going on. A similar situation
occurs when the client DLM lock LRU shrinking code is invoked in the context of
a memory allocation.
1417 8.3. Sub-environments
1418 =====================
1420 As described above, lu_env (specifically, lu_env->le_ses) is used on a client
1421 to allocate per-IO state, including foo_io data on every layer. This leads to a
1422 complication at the LOV layer, which maintains multiple sub-IOs. As layers
1423 below LOV allocate their IO slices in lu_env->le_ses, LOV has to allocate an
1424 lu_env for every sub-IO and to carefully juggle them when invoking lower layer
1425 methods. The case of a single IO is optimized by re-using the top-environment.
1434 Lookup ends up calling ll_update_inode() to setup a new inode with a given
1435 meta-data descriptor (obtained from the meta-data path). cl_inode_init() calls
1436 cl_object_find() eventually calling lu_object_find_try() that either finds a
1437 cl_object in the cache or allocates a new one, calling
1438 lu_device_operations::ldo_object_{alloc,init}() methods on every layer top to
1439 bottom. Every layer allocates its private data structure ({vvp,lov}_object) and
1440 links it into an object header (cl_object_header) by calling lu_object_add().
1441 At the VVP layer, vvp_object contains a pointer to the inode. The LOV layer
1442 allocates a lov_object containing an array of pointers to sub-objects that are
1443 found in the cache or allocated by calling cl_object_find (recursively). These
1444 sub-objects have LOVSUB and OSC layer data.
1446 A top-object and its sub-objects are inserted into a global FID-based hash
1447 table and a global LRU list.
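The find-or-create protocol of lu_object_find() can be modeled with a toy
open-addressing hash (invented names; the real code uses a shared hash table,
reference counting and an LRU, all omitted here):

```c
#include <assert.h>

#define TOY_HASH_SIZE 32

struct toy_object { unsigned long to_fid; };

static struct toy_object toy_cache[TOY_HASH_SIZE];
static int toy_cache_used[TOY_HASH_SIZE];
static int toy_allocs; /* how many objects were actually allocated */

/* Look the object up by FID; on a miss, "allocate" it, which in the
 * kernel triggers ldo_object_{alloc,init}() on every layer, top to
 * bottom. Repeated lookups return the cached instance. */
static struct toy_object *toy_object_find(unsigned long fid)
{
	unsigned i = fid % TOY_HASH_SIZE;

	while (toy_cache_used[i] && toy_cache[i].to_fid != fid)
		i = (i + 1) % TOY_HASH_SIZE; /* linear probing */
	if (!toy_cache_used[i]) {
		toy_cache_used[i] = 1;
		toy_cache[i].to_fid = fid;
		toy_allocs++;
	}
	return &toy_cache[i];
}
```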
1449 9.2. First IO to a File
1450 =======================
1452 After an object is instantiated as described in the previous use case, the
1453 first IO call against this object has to create DLM locks. The following
1454 operations re-use cached locks (see below).
1456 A read call starts at ll_file_readv() which eventually calls
1457 ll_file_io_generic(). This function calls cl_io_init() to initialize an IO
1458 context, which calls the cl_object_operations::coo_io_init() method on every
1459 layer. As in the case of object instantiation, these methods allocate
1460 layer-private IO state ({vvp,lov}_io) and add it to the list hanging off of the
1461 IO context header cl_io by calling cl_io_add(). At the VVP layer, vvp_io_init()
1462 handles special cases (like count == 0), updates statistic counters, and in the
1463 case of write it takes a per-inode semaphore to avoid possible deadlock.
1465 At the LOV layer, lov_io_init_raid0() allocates a struct lov_io and stores in
1466 it the original IO parameters (starting offset and byte count). This is needed
1467 because LOV is going to modify these parameters. Sub-IOs are not allocated at
1468 this point---they are lazily instantiated later.
1470 Once the top-IO has been initialized, ll_file_io_generic() enters the main IO
1471 loop cl_io_loop() that drives IO iterations, going through
1473 - cl_io_iter_init() calling cl_io_operations::cio_iter_init() top-to-bottom
1474 - cl_io_lock() calling cl_io_operations::cio_lock() top-to-bottom
1475 - cl_io_start() calling cl_io_operations::cio_start() top-to-bottom
1476 - cl_io_end() calling cl_io_operations::cio_end() bottom-to-top
1477 - cl_io_unlock() calling cl_io_operations::cio_unlock() bottom-to-top
1478 - cl_io_iter_fini() calling cl_io_operations::cio_iter_fini() bottom-to-top
1479 - cl_io_rw_advance() calling cl_io_operations::cio_advance() bottom-to-top
1481 repeatedly until cl_io::ci_continue remains 0 after an iteration. These "IO
1482 iterations" move an IO context through consecutive states (see enum
1483 cl_io_state). ->cio_iter_init() decides at each layer what part of the
1484 remaining IO is to be done during current iteration. Currently,
lov_io_rw_iter_init() is the only non-trivial implementation of this method. It
does the following:
1488 - Except for the cases of truncate and O_APPEND write, it shrinks the IO extent
1489 recorded in the top-IO (starting offset and bytes count) so that this extent
1490 is fully contained within a single stripe. This avoids "cascading evictions";
1492 - It allocates sub-IOs for all stripes intersecting with the resulting IO range
(which, in the case of a read or non-append write, means creating a single
sub-IO) by
1494 calling cl_io_init() that (as above) creates a cl_io context with lovsub_io
1495 and osc_io layers. The initialized cl_io is primed from the top-IO
1496 (lov_io_sub_inherit()) and cl_io_iter_init() is called against it;
- Finally, all sub-IOs for the current iteration are linked together into a
1499 lov_io::lis_active list.
1501 Now we have a top-IO and its sub-IO in CIS_IT_STARTED state. cl_io_lock()
1502 collects locks on all layers without actually enqueuing them: vvp_io_rw_lock()
1503 requests a lock on the IO extent (possibly shrunk by LOV, see above) and
1504 optionally on extents of Lustre files that happen to be memory-mapped onto the
1505 user-level buffer used for this IO. In the future layers like SNS might request
1506 additional locks, e.g., to protect parity blocks.
1508 Locks requested by ->cio_lock() methods are added to the cl_lockset embedded
1509 into top cl_io. The lockset contains 3 lock queues: "todo", "current" and
1510 "done". Locks are initially placed in the todo queue. Once locks from all
1511 layers have been collected, they are sorted to avoid deadlocks
(cl_io_locks_sort()) and then enqueued by cl_lockset_lock(). The latter can
1513 enqueue multiple locks concurrently if the enqueuing mode guarantees this is
1514 safe (e.g., lock is a try-lock). Locks being enqueued are in the "current"
queue, from where they are moved into the "done" queue when the lock is granted.
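The deadlock-avoidance idea behind cl_io_locks_sort() is simply a canonical
ordering of the gathered lock requirements before enqueue: if every thread
acquires locks in the same global order, circular waits are impossible. A
sketch with invented field names:

```c
#include <assert.h>
#include <stdlib.h>

struct toy_lock_req {
	int  obj;   /* which object the lock covers */
	long start; /* starting offset of the extent */
};

/* Canonical order: by object, then by starting offset. */
static int toy_lock_cmp(const void *a, const void *b)
{
	const struct toy_lock_req *la = a, *lb = b;

	if (la->obj != lb->obj)
		return la->obj < lb->obj ? -1 : 1;
	if (la->start != lb->start)
		return la->start < lb->start ? -1 : 1;
	return 0;
}

/* Sort the "todo" queue before the locks are enqueued one by one. */
static void toy_locks_sort(struct toy_lock_req *todo, int nr)
{
	qsort(todo, nr, sizeof(*todo), toy_lock_cmp);
}
```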
1517 At this stage we have top- and sub-IO in the CIS_LOCKED state with all needed
1518 locks held. cl_io_start() moves cl_io into CIS_IO_GOING mode and calls
1519 ->cio_start() method. In the VVP layer this method invokes some version of
generic_file_{read,write}() functions.
In the case of read, generic_file_read() calls, for every non-up-to-date page,
the a_ops->readpage() method that eventually (after obtaining the cl_page
1524 corresponding to the VM page supplied to it) calls cl_io_read_page() which in
1525 turn calls cl_io_operations::cio_read_page().
vvp_io_read_page() populates a queue with the target page and with pages from
the read-ahead window. The resulting queue is then submitted for immediate
transfer by calling cl_io_submit_rw(), which ends up calling
osc_io_submit_page() for every page in the queue that is not up to date.
->readpage() returns at this point, and the VM waits on the VM page lock,
which is released by the transfer completion handler, before copying page
data to the user buffer.
In the case of write, generic_file_write() calls the a_ops->prepare_write()
and a_ops->commit_write() address space methods, which end up calling
cl_io_prepare_write() and cl_io_commit_write() respectively. These functions
follow the normal Linux protocol for write, including a possible synchronous
read of the non-overwritten part of a page (the vvp_page_sync_io() call in
vvp_io_prepare_partial()). In the normal case the dirtied page ends up in the
staging area (the cl_page_cache_add() call in vvp_io_commit_write()). If the
staging area is already full, cl_page_cache_add() fails with -EDQUOT and the
page is transferred immediately instead, by calling vvp_page_sync_io().
Subsequent IO calls will, most likely, find suitable locks already cached on
the client. This happens because the server tries to grant as large a lock as
possible, to reduce future enqueue RPC traffic for a given file from a given
client. Cached locks are kept (in no particular order) on the
cl_object_header::coh_locks list. When, in the cl_io_lock() step, a layer
requests a lock, this list is scanned for a matching lock. If the found lock
is in the HELD or CACHED state, it can be re-used immediately by simply
calling cl_lock_use(), which eventually calls ldlm_lock_addref_try() to
protect the underlying DLM lock from concurrent cancellation while the IO is
going on. If a lock in another state (NEW, QUEUING or ENQUEUED) is found, it
is enqueued as usual.
9.4. Lock-less and No-cache IO
==============================
An IO context has a "locking mode", selected from the set MAYBE, NEVER or
MANDATORY (enum cl_io_lock_dmd), that specifies what degree of distributed
cache coherency is assumed by this IO. MANDATORY mode requires all caches
accessed by this IO to be protected by distributed locks. In NEVER mode no
distributed coherency is needed, at the expense of not caching the data. This
mode is required in cases where the client cannot or will not participate in
the cache coherency protocol (e.g., a liblustre client that cannot respond to
lock blocking call-backs while in its compute phase). In MAYBE mode some of
the caches involved in this IO are used and kept globally coherent, while
other caches are bypassed.
O_APPEND writes and truncates are always executed in MANDATORY mode. All
other calls are executed in NEVER mode by liblustre (see below) and in MAYBE
mode by a normal Linux client.
In MAYBE mode every OSC individually decides whether to use DLM locking. An
OST might return -EUSERS to an enqueue RPC, indicating that the stripe in
question is contended and that the client should switch to lockless IO mode.
If this happens, the OSC, instead of taking an ldlm_lock, creates a special
"lockless OSC lock" that is not backed by a DLM lock. This lock conflicts
with any other lock in its range and self-cancels when its last user is
removed. As a result, when IO proceeds to a stripe that is in lockless mode,
all conflicting extent locks are cancelled, purging the cache. When IO
against this stripe ends, the lock is cancelled, sending dirty pages (just
placed in the cache by the IO) back to the server and invalidating the cache
again. "Lockless locks" allow the lockless and no-cache IO modes to be
implemented by the same code paths as cached IO.