1 ******************************************************
2 * Overview of the Lustre Client I/O (CLIO) subsystem *
3 ******************************************************
7 Nikita Danilov <Nikita_Danilov@xyratex.com>
16 ii. Top-{object,lock,page}, Sub-{object,lock,page}
18 1.3. Main Differences with the Pre-CLIO Client Code
19 1.4. Layered objects, Slices, Headers
27 2.2. LOV, LOVSUB (layouts)
30 3.1. FID, Hashing, Caching, LRU
31 3.2. Top-object, Sub-object
32 3.3. Object Operations
33 3.4. Object Attributes
38 4.3. Page Transfer Locking
40 4.5. Page Initialization
43 5.2. cl_lock and LDLM Lock
44 5.3. Use Case: Lock Invalidation
49 6.4. Data-flow: From Stack to IO Slice
51 7.1. Immediate vs. Opportunistic Transfers
53 7.3. Transfer States: Prepare, Completion
54 7.4. Page Completion Handlers, Synchronous Transfer
55 8. LU Environment (lu_env)
56 8.1. Motivation, Server Environment Usage
61 9.2. First IO to a File
64 9.3. Lock-less and No-cache IO
73 CLIO is a re-write of interfaces between layers in the client data-path (read,
74 write, truncate). Its goals are:
76 - Reduce the number of bugs in the IO path;
78 - Introduce more logical layer interfaces instead of current all-in-one OBD
81 - Define clear and precise semantics for the interface entry points;
83 - Simplify the structure of the client code.
85 - Support upcoming features:
89 . parallel non-blocking IO,
92 - Reduce stack consumption.
96 - No meta-data changes;
97 - No support for 2.4 kernels;
99 - No changes to recovery;
100 - The same layers with mostly the same functionality;
101 - As few changes to the core logic of each Lustre data-stack layer as possible
102 (e.g., no changes to the read-ahead or OSC RPC logic).
107 Any discussion of client functionality has to talk about `read' and `write'
108 system calls on the one hand and about `read' and `write' requests to the
109 server on the other hand. To avoid confusion, the former high level operations
110 are called `IO', while the latter are called `transfer'.
112 Many concepts apply uniformly to pages, locks, files, and IO contexts, such as
113 reference counting, caching, etc. To describe such situations, a common term is
114 needed to denote things from any of the above classes. `Object' would be a
115 natural choice, but files and especially stripes are already called objects, so
116 the term `entity' is used instead.
Due to striping it is often the case that an entity is composed of multiple
entities of the same kind: a file is composed of stripe objects, a logical lock
on a file is composed of stripe locks on the file's stripes, etc. In these
cases we shall talk about a top-object, top-lock, top-page or top-IO being
constructed from sub-objects, sub-locks, sub-pages or sub-IOs respectively.
The topmost module in the Linux client is traditionally known as `llite'. The
corresponding CLIO layer is called `VVP' (VFS, VM, POSIX) to reflect its
functional responsibilities.
128 1.3. Main Differences with the Pre-CLIO Client Code
129 ===================================================
131 - Locks on files (as opposed to locks on stripes) are first-class objects (i.e.
134 - Sub-objects (stripes) are first-class objects;
136 - Stripe-related logic is moved out of llite (almost);
138 - IO control flow is different:
140 . Pre-CLIO: llite implements control flow, calling underlying OBD
141 methods as necessary;
143 . CLIO: generic code (cl_io_loop()) controls IO logic calling all
144 layers, including VVP.
In other words, instead of calling some pre-existing `lustre interface', VVP
(or any other top layer) also implements parts of this interface.
149 - The lu_env allocator from MDT is used on the client.
154 CLIO continues the layered object approach that was found to be useful for the
155 MDS rewrite in Lustre 2.0. In this approach, instances of key entity types
156 (files, pages, locks, etc.) are represented as a header, containing attributes
157 shared by all layers. Each header contains a linked list of per-layer `slices',
158 each of which contains a pointer to a vector of function pointers. Generic
159 operations on layered objects are implemented by going through the list of
160 slices and invoking the corresponding function from the operation vector at
161 every layer. In this way generic object behavior is delegated to the layers.
163 For example, a page entity is represented by struct cl_page, from which hangs
164 off a list of cl_page_slice structures, one for each layer in the stack.
165 cl_page_slice contains a pointer to struct cl_page_operations.
166 cl_page_operations has the field
168 void (*cpo_completion)(const struct lu_env *env,
169 const struct cl_page_slice *slice, int ioret);
171 When transfer of a page is finished, ->cpo_completion() methods are called in a
172 particular order (bottom to top in this case).
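For illustration, here is a minimal sketch of how such a bottom-to-top dispatch
might look. This is schematic rather than the actual cl_page code; the list
head and field names (cp_layers, cpl_linkage, cpl_ops) follow the slice/header
scheme described above and should be checked against cl_object.h:

    /* Schematic dispatch of ->cpo_completion() over the page slices, from
     * the bottom-most (last) slice to the top-most (first) one. */
    static void page_completion_sketch(const struct lu_env *env,
                                       struct cl_page *page, int ioret)
    {
            const struct cl_page_slice *slice;

            list_for_each_entry_reverse(slice, &page->cp_layers, cpl_linkage) {
                    if (slice->cpl_ops->cpo_completion != NULL)
                            slice->cpl_ops->cpo_completion(env, slice, ioret);
            }
    }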
174 Allocation of slices is done during instance creation. If a layer needs some
175 private state for an object, it embeds the slice into its own data structure.
For example, the OSC layer defines

    struct osc_lock {
            struct cl_lock_slice     ols_cl;
            struct ldlm_lock        *ols_lock;
            ...
    };
184 When an operation from cl_lock_operations is called, it is given a pointer to
185 struct cl_lock_slice, and the layer casts it to its private structure (for
186 example, struct osc_lock) to access per-layer state.
188 The following types of layered objects exist in CLIO:
190 - File system objects (files and stripes): struct cl_object_header, slices
191 are of type struct cl_object;
- Cached pages with data: struct cl_page, slices are of type cl_page_slice;
196 - Extent locks: struct cl_lock, slices are of type cl_lock_slice;
198 - IO content: struct cl_io, slices are of type cl_io_slice;
Entities with different sequences of slices can co-exist. A typical example of
this is a local vs. remote object on the MDS. A local object, based on some
file in the local file system, has MDT, MDD, LOD and OSD as its layers, whereas
a remote object (representing an object local to some other MDT) has MDT, MDD,
209 When the client is being mounted, its device stack is configured according to
210 llog configuration records. The typical configuration is
    [device tree diagram omitted; the osc_device's form the bottom of the tree]
222 In this tree every node knows its descendants. When a new file (inode) is
223 created, every layer, starting from the top, creates a slice with a state and
224 an operation vector for this layer, appends this slice to the tail of a list
225 anchored at the object header, and then calls the corresponding lower layer
226 device to do the same. That is, the file object structure is determined by the
227 configuration of devices to which this file belongs.
229 Pages and locks, in turn, belong to the file objects, and when a new page is
230 created for a given object, slices of this object are iterated through and
231 every slice is asked to initialize a new page, which includes (usually)
232 allocation of a new page slice and its insertion into a list of page slices.
233 Locks and IO context instantiation is handled similarly.
All layered objects except locks, IO contexts and transfer requests (that
leaves file objects and pages) are reference counted and cached. They have a
uniform caching mechanism:
242 - Objects are kept in some sort of an index (global FID hash for file objects,
243 per-file radix tree for pages, and per-file list for locks);
- A reference to an object can be acquired by the cl_{object,page}_find()
  functions that search the index and, if the object is not there, create a
  new one and insert it into the index;
- A reference is released by the cl_{object,page}_put() functions. When the
  last reference is released, the object is returned to the cache (still in
  the index), except when the user has explicitly set the `do not cache' flag
  for this object. In the latter case the object is destroyed immediately.
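As a hedged illustration of this lookup-or-create pattern, the fragment below
finds a file object by FID and releases it; the cl_object_find() and
cl_object_put() signatures are simplified here and should be checked against
cl_object.h:

    /* Sketch: take a reference on a file object, use it, and release it
     * back into the cache (or destroy it, if `do not cache' was set). */
    struct cl_object *obj;

    obj = cl_object_find(env, dev, fid, conf);  /* search index or create   */
    if (!IS_ERR(obj)) {
            /* ... create pages and locks, start IO against obj ... */
            cl_object_put(env, obj);            /* last put caches or frees */
    }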
Locks (cl_lock) are owned by individual IO threads. A cl_lock is a container
of lock requirements for underlying DLM locks; it is allocated by
cl_lock_request() and destroyed by cl_lock_cancel(). DLM locks are cacheable
and can be reused by cl_locks.
259 IO contexts are owned by a thread (or, potentially a group of threads) doing
260 IO, and need neither reference counting nor indexing. Similarly, transfer
261 requests are owned by an OSC device, and their lifetime is from RPC creation
262 until completion notification.
267 All types of layered objects contain a state-machine, although for the transfer
268 requests this machine is trivial (CREATED -> PREPARED -> INFLIGHT ->
269 COMPLETED), and for the file objects it is very simple. Page, lock, and IO
270 state machines are described in more detail below.
As a general rule, state machine transitions are made under some kind of lock:
the VM lock for a page, and the LU site spin-lock for an object. After some
event that might cause a state transition, this lock is taken, and the object
state is analysed to check whether the transition is possible. If it is, the
state machine is advanced to the new state and the lock is released. IO state
transitions do not require concurrency control.
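Schematically, such a guarded transition follows the usual check-under-lock
idiom; the structure and the transition_is_valid() helper below are purely
hypothetical:

    struct entity {
            spinlock_t e_guard;     /* stands in for the VM or site lock */
            int        e_state;     /* current state machine state       */
    };

    static void entity_event(struct entity *e, int new_state)
    {
            spin_lock(&e->e_guard);
            if (transition_is_valid(e->e_state, new_state)) /* hypothetical */
                    e->e_state = new_state;                 /* advance      */
            spin_unlock(&e->e_guard);
    }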
282 State machine and reference counting interact during object destruction. In
283 addition to temporary pointers to an entity (that are counted in its reference
284 counter), an entity is reachable through
286 - Indexing structures described above
288 - Pointers internal to some layer of this entity. For example, a page is
289 reachable through a pointer from VM page, and sub-{object, lock, page} might
290 be reachable through a pointer from its top-entity.
292 Entity destruction happens in three phases:
- First, a decision is made to destroy an entity, when, for example, a page is
  truncated from a file, or an inode is destroyed. At this point the `do not
  cache' bit is set in the entity header, and all ways to reach the entity
  from internal pointers are severed.

  The cl_{page,object}_get() functions never return an entity with the `do not
  cache' bit set, so from this moment no new internal pointers can be created.
303 - Pointers `drain' for some time as existing references are released. In
304 this phase the entity is reachable through
306 . temporary pointers, counted in its reference counter, and
307 . possibly a pointer in the indexing structure.
309 - When the last reference is released, the entity can be safely freed (after
310 possibly removing it from the index).
312 See lu_object_put(), cl_page_put().
317 The CLIO code resides in the following files:
319 {llite,lov,osc}/*_{dev,object,lock,page,io}.c
All global CLIO data-types are defined in the include/cl_object.h header, which
contains detailed documentation. Generic CLIO code is in
obdclass/cl_{object,page,lock,io}.c.
327 An implementation of CLIO interfaces for a layer foo is located in
328 foo/foo_{dev,object,page,lock,io}.c files.
330 Definitions of data-structures shared within a layer are in
331 foo/foo_cl_internal.h.
This section briefly outlines the responsibility of every layer in the stack.
A more detailed description of the functionality is in the following sections
on objects, pages, locks and IO.
2.1. VVP, Echo-client
=====================
There are currently two options for the top-most Lustre layer:
346 - VVP: linux kernel client,
347 - echo-client: special client used by the Lustre testing sub-system.
349 Other possibilities are:
351 - Client ports to other operating systems (OSX, Windows, Solaris),
352 - pNFS and NFS exports.
354 The responsibilities of the top-most layer include:
356 - Definition of the entry points through which Lustre is accessed by the
358 - Interaction with the hosting VM/MM system;
359 - Interaction with the hosting VFS or equivalent;
- Implementation of the desired semantics on top of Lustre (e.g. POSIX
363 Let's look at VVP in more detail. First, VVP implements VFS entry points
364 required by the Linux kernel interface: ll_file_{read,write,sendfile}(). Then,
365 VVP implements VM entry points: ll_{write,invalidate,release}page().
367 For file objects, VVP slice (vvp_object) contains a pointer to an inode.
369 For pages, the VVP slice (vvp_page) contains a pointer to the VM page
370 (struct page), a `defer up to date' flag to track read-ahead hits (similar to
371 the pre-CLIO client), and fields necessary for synchronous transfer (see
372 below). VVP is responsible for implementation of the interaction between
373 client page (cl_page) and the VM.
375 There is no special VVP private state for locks.
377 For IO, VVP implements
379 - Mapping from Linux specific entry points (readv, writev, sendfile, etc.)
384 - POSIX features like short reads, O_APPEND atomicity, etc.
386 - Read-ahead (this is arguably not the best layer in which to implement
387 read-ahead, as the existing read-ahead algorithm is network-aware).
392 The LOV layer implements RAID-0 striping. It maps top-entities (file objects,
393 locks, pages, IOs) to one or more sub-entities. LOVSUB is a companion layer
394 that does the reverse mapping.
399 The OSC layer deals with networking stuff:
401 - It decides when an efficient RPC can be formed from cached data;
403 - It calls LNET to initiate a transfer and to get notification of completion;
405 - It calls LDLM to implement distributed cache coherency, and to get
406 notifications of lock cancellation requests;
412 3.1. FID, Hashing, Caching, LRU
413 ===============================
Files and stripes are collectively known as (file system) `objects'. The CLIO
client reuses the support for layered objects from the MDT stack. Both client
and MDT objects are based on the struct lu_object type, which represents a
slice of a file system object. The lu_objects for a given object are linked
through their ->lo_linkage fields into a list hanging off the ->loh_layers
field of struct lu_object_header, which represents the whole layered object.
lu_object and lu_object_header provide functionality common between a client
and a server:
425 - An object is uniquely identified by a FID; all objects are kept in a hash
426 table indexed by a FID;
428 - Objects are reference counted. When the last reference to an object is
429 released it is returned back into the cache, unless it has been explicitly
430 marked for deletion, in which case it is immediately destroyed;
- Objects in the cache are kept in an LRU list that is scanned to keep cache
435 On the MDT, lu_object is wrapped into struct md_object where additional state
436 that all server-side objects have is stored. Similarly, on a client, lu_object
437 and lu_object_header are embedded into struct cl_object and struct
438 cl_object_header where additional client state is stored.
440 3.2. Top-object, Sub-object
441 ===========================
443 An important distinction from the server side, where md_object and dt_object
444 are used, is that cl_object "fans out" at the LOV level: depending on the file
445 layout, a single file is represented as a set of "sub-objects" (stripes). At
446 the implementation level, struct lov_object contains an array of cl_objects.
447 Each sub-object is a full-fledged cl_object, having its FID and living in the
448 LRU and hash table. Each sub-object has its own radix tree of pages, and its
This leads to the next important difference from the server side: on the
client it is quite usual to have objects with different sequences of layers.
For example, a typical top-object is composed of the following layers:

  - VVP
  - LOV

whereas its sub-objects are composed of layers:

  - LOVSUB
  - OSC

Here "LOVSUB" is a mostly dummy layer, whose purpose is to keep track of the
object-subobject relationship:
    [diagram omitted: the top-object's cl_object_header and the
     cl_object_header of each of its sub-objects are all linked into the
     global object LRU list, with the top-object's LOV slice referring to
     the sub-objects]
Sub-objects are not cached independently: when a top-object is about to be
discarded from memory, all its sub-objects are torn down and destroyed too.
491 3.3. Object Operations
492 ======================
In addition to the lu_object_operations vector, each cl_object slice has
cl_object_operations. lu_object_operations deals with the creation and
destruction of objects, while the client-specific cl_object_operations fall
into two categories:

- Creation of dependent entities: the ->coo_{page,lock,io}_init() methods,
  called at every layer when a new page, lock or IO context is being created;

- Object attributes: the ->coo_attr_{get,set}() methods, called to get or set
  the common client object attributes (struct cl_attr): size, [mac]times, etc.
506 3.4. Object Attributes
507 ======================
A cl_object has a set of attributes defined by struct cl_attr. Attributes
include the object size, the object known-minimum-size (KMS), access, change
and modification times, and ownership identifiers. A description of KMS is
beyond the scope of this document; refer to the (non-)existent Lustre
documentation on the subject.
Both top-objects and sub-objects have attributes. Consistency of the
attributes is protected by a lock on the top-object, taken and released
through the cl_object_attr_{un,}lock() calls. This allows a sub-object and its
top-object attributes to be changed atomically.
Attributes are accessible through the cl_object_attr_{g,s}et() functions that
call the per-layer ->coo_attr_{s,g}et() object methods. Top-object attributes
are calculated from the sub-object ones by lov_attr_get(), which optimizes for
the case when none of the sub-object attributes have changed since the last
call to lov_attr_get().
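A hedged sketch of reading attributes atomically under this lock follows; the
prototypes and the cat_size field name are assumptions to be checked against
cl_object.h:

    /* Sketch: read the size of a top-object consistently with concurrent
     * attribute updates on its sub-objects. */
    static loff_t object_size_sketch(const struct lu_env *env,
                                     struct cl_object *top)
    {
            struct cl_attr attr = { 0 };

            cl_object_attr_lock(top);
            cl_object_attr_get(env, top, &attr); /* ->coo_attr_get() per layer */
            cl_object_attr_unlock(top);

            return attr.cat_size;
    }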
As a further potential optimization, one could recalculate top-object
attributes at the moment when any sub-object attribute is changed. This would
make it possible to avoid collecting cumulative attributes over all
sub-objects. To implement this optimization _all_ changes of sub-object
attributes must go through cl_object_attr_set().
The layout of an object determines how the data of the file are placed onto
OSTs. The layout of an object can be changed, and if that happens, a client
has to reconfigure the object layout by calling cl_conf_set() before it can do
anything else to this object.
In order to be notified of a layout change, the client has to cache an
inodebits lock, MDS_INODELOCK_LAYOUT, in memory. To change an object's layout,
the MDT takes the layout lock in EX mode, so all clients that have this object
cached will be notified.
Reconfiguring the layout of an object is expensive, because the client has to
clean up the page cache and rebuild the sub-objects. The lsm_layout_gen field
in lov_stripe_md must be increased whenever the layout of an object is
changed. Therefore, if the revocation of the layout lock is due to false
sharing of the ibits lock, lsm_layout_gen is not changed and the client can
reuse the page cache and sub-objects.
CLIO uses ll_layout_refresh() to make sure that a valid layout is fetched.
This function must be called before any IO can be started against an object.
A cl_page represents a portion of a file, cached in memory. All pages of a
given file are of the same size, and are kept in the kernel's radix tree.

A cl_page is associated with a VM page of the hosting environment (struct page
in the Linux kernel, for example), cfs_page_t. It is assumed that this
association is implemented by one of the cl_page layers (the top layer in the
current design), which:
- intercepts per-VM-page call-backs made by the host environment (e.g., memory
  pressure);

- translates state (page flag bits) and locking between Lustre and the host
  environment.
The association between cl_page and cfs_page_t is immutable and established
when the cl_page is created. It is possible to imagine a setup where different
pages get their backing VM buffers from different sources. For example, in the
case of pNFS export, some pages might be backed by local DMU buffers, while
others (representing data in remote stripes) are backed by normal VM pages.
Unlike the other entities in CLIO, there are no sub-pages for a cl_page.
585 Pages within a given object are linearly ordered. The page index is stored in
586 the ->cpl_index field in cl_page_slice. In a typical Lustre setup, a top-object
587 has an array of sub-objects, and every page in a top-object corresponds to a
588 page slice in one of its sub-objects.
There is a radix tree of pages at the OSC layer. When an LDLM lock is being
cancelled, the OSC looks up this radix tree so that the pages belonging to the
extent of the corresponding lock can be destroyed.
A cl_page can be "owned" by a particular cl_io (see below), guaranteeing this
IO exclusive access to the page with regard to other IO attempts and various
events changing page state (such as transfer completion, or eviction of the
page from memory). Note that, in general, a cl_io cannot be identified with a
particular thread, and page ownership is not exactly equal to the current
thread holding a lock on the page. The layer implementing the association
between cl_page and cfs_page_t has to implement ownership on top of the
available synchronization mechanisms.
While the Lustre client maintains the notion of page ownership by IO, the
hosting MM/VM usually has its own page concurrency control mechanisms. For
example, in Linux, page access is synchronized by the per-page PG_locked
bit-lock, and generic kernel code (generic_file_*()) takes care to acquire and
release such locks as necessary around the calls to the file system methods
(->readpage(), ->write_begin(), ->write_end(), etc.). This leads to a
situation where there are two different ways to own a page in the client:
- Client code explicitly and voluntarily owns the page (cl_page_own());
616 - The hosting VM locks a page and then calls the client, which has to "assume"
617 ownership from the VM (cl_page_assume()).
The dual methods to release ownership are cl_page_disown() and
cl_page_unassume().
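Schematically, the two paths look as follows (error handling omitted; the
exact prototypes are in cl_object.h):

    /* Path 1: client code voluntarily takes ownership of the page. */
    if (cl_page_own(env, io, page) == 0) {
            /* ... the IO has exclusive access to the page ... */
            cl_page_disown(env, io, page);
    }

    /* Path 2: the VM already holds the page lock (e.g., inside ->readpage()),
     * so the IO only assumes the ownership handed over by the VM. */
    cl_page_assume(env, io, page);
    /* ... */
    cl_page_unassume(env, io, page);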
621 4.3. Page Transfer Locking
622 ==========================
624 cl_page implements a simple locking design: as noted above, a page is protected
625 by a VM lock while IO owns it. The same lock is kept while the page is in
626 transfer. Note that this is different from the standard Linux kernel behavior
627 where page write-out is protected by a lock (PG_writeback) separate from VM
628 lock (PG_locked). It is felt that this single-lock design is more portable and,
629 moreover, Lustre cannot benefit much from a separate write-out lock due to LDLM
635 See documentation for cl_object.h:cl_page_operations. See cl_page state
636 descriptions in documentation for cl_object.h:cl_page_state.
638 4.5. Page Initialization
639 ========================
A cl_page is the most frequently allocated and freed entity in the CLIO stack.
In order to improve the performance of allocation and freeing, a cl_page,
along with the corresponding cl_page_slice of each layer, is allocated as a
single memory buffer.

CLIO supports different types of object layout, and each layout may lead to a
different cl_page size. When an object is initialized, the object
initialization method ->loo_object_init() of each layer decides the size of
the buffer for its cl_page_slice by calling cl_object_page_init().
cl_object_page_init() adds that size to coh_page_bufsize of the top
cl_object_header, and co_slice_off of the corresponding cl_object is used to
remember the offset of the page slice for that layer.
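As an illustration, a layer's ->loo_object_init() might reserve room for its
page slice roughly as follows. This is a sketch: struct foo_page is a
hypothetical per-layer slice, and the cl_object_page_init() and lu2cl()
signatures are assumptions to be checked against the headers:

    /* Sketch: reserve space for this layer's page slice in the single
     * cl_page allocation; the size is accumulated in coh_page_bufsize and
     * the slice offset is remembered in co_slice_off. */
    static int foo_object_init_sketch(const struct lu_env *env,
                                      struct lu_object *obj)
    {
            struct cl_object *clob = lu2cl(obj); /* lu_object -> cl_object */

            return cl_object_page_init(clob, sizeof(struct foo_page));
    }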
A struct cl_lock represents an extent lock on cached file or stripe data.
cl_lock is used only to collect the lock requirements needed to complete an
IO. Because of mmap, more than one lock may be required to complete a single
IO. Except in the cases of direct IO and lockless IO, a cl_lock is attached to
an LDLM lock on the OSC layer.
664 As locks protect cached data, and the unit of data caching is a page, locks are
The lock requirements are collected in cl_io_lock(), where the ->cio_lock()
method of each layer is invoked to decide the lock extent from the IO region,
the layout, and the IO buffers. For example, the VVP layer has to search the
IO buffers, and if a buffer belongs to a region memory-mapped from a Lustre
file, locks for the corresponding file will be required.
Once the lock requirements are collected, cl_lock_request() is called to
create and initialize the individual locks. In cl_lock_request(),
->clo_enqueue() is called for each layer; in particular, on the OSC layer
osc_lock_enqueue() is called to match or create an LDLM lock that fulfills the
lock requirement.
cl_lock is not cacheable; the locks are destroyed once the IO is complete. The
destruction process starts from cl_io_unlock(), where cl_lock_release() is
called for each cl_lock. In cl_lock_release(), the ->clo_cancel() method of
each layer is called to release the resources held by the cl_lock. The most
important resource held by a cl_lock is the LDLM lock on the OSC layer; it is
released by osc_lock_cancel(). LDLM locks can still be cached in memory after
being detached from a cl_lock.
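Putting the lifetime together, the flow around a single IO looks roughly like
the sketch below (prototypes simplified, error handling omitted):

    /* cl_io_lock(): each layer's ->cio_lock() has recorded its requirements;
     * cl_lock_request() then creates the cl_lock and calls ->clo_enqueue()
     * on every layer (OSC matches or creates an LDLM lock). */
    rc = cl_lock_request(env, io, lock);

    /* ... the IO runs under the DLM lock(s) ... */

    /* cl_io_unlock(): cl_lock_release() calls ->clo_cancel() on every layer;
     * OSC drops its hold on the LDLM lock, which may stay cached. */
    cl_lock_release(env, lock);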
689 5.2. cl_lock and LDLM Lock
690 ==========================
As a result of the enqueue, an LDLM lock is attached to the cl_lock_slice on
the OSC layer; the ols_dlmlock field in the osc_lock points to the LDLM lock.

When an LDLM lock is attached to an osc_lock, its use count (l_readers,
l_writers) is increased, so it cannot be revoked during this time. An LDLM
lock can be shared by multiple osc_locks; in that case each osc_lock holds the
LDLM lock according to its use type, i.e., it increases l_readers or l_writers
respectively.
When a cl_lock is cancelled, its hold on the corresponding LDLM lock is
released. Cancellation of a cl_lock does not necessarily cause the underlying
LDLM lock to be cancelled; the LDLM lock can remain cached in memory unless it
is being revoked.

To cache a page in client memory, the page index must be covered by the extent
of at least one LDLM lock. Please refer to section 5.3 for the details of lock
invalidation.
709 5.3. Use Case: Lock Invalidation
710 ================================
712 To demonstrate how objects, pages and lock data-structures interact, let's look
713 at the example of stripe lock invalidation.
Imagine that on the client C0 there is a file object F, striped over stripes
S0, S1 and S2 (files and stripes are represented by cl_object_header). Further,
C0 has just finished a write IO to the extent [a, b] of file F, leaving some
clean and dirty pages in C0's memory, together with the LDLM locks LS0, LS1
and LS2 for the corresponding stripes S0, S1 and S2. As described in section
4.1, the cached pages stay in the radix trees of S0, S1 and S2.
Some other client requests a lock that conflicts with LS1. The OST where S1
lives sends a blocking AST to C0.
C0's LDLM invokes lock->l_blocking_ast(), which is osc_ldlm_blocking_ast(),
which eventually calls osc_cache_writeback_range() with the extent of the
corresponding LDLM lock as a parameter. To find the pages covered by an LDLM
lock, the LDLM lock stores a pointer to the osc_object in its l_ast_data
field.
osc_cache_writeback_range() checks whether there are dirty pages within the
extent of this lock. If there are, an OST_WRITE RPC is issued to write the
dirty pages back.
Once all pages are written, they are removed from the radix trees and
destroyed; osc_lock_discard_pages() is called for this purpose. It looks up
the radix tree and discards every page that the extent covers.
742 An IO context (struct cl_io) is a layered object describing the state of an
743 ongoing IO operation (such as a system call).
748 There are two classes of IO contexts, represented by cl_io:
750 - An IO for a specific type of client activity, enumerated by enum cl_io_type:
752 . CIT_READ: read system call including read(2), readv(2), pread(2),
754 . CIT_WRITE: write system call;
755 . CIT_SETATTR: truncate and utime system call;
756 . CIT_FAULT: page fault handling;
757 . CIT_FSYNC: fsync(2) system call, ->writepages() writeback request;
759 - A `catch-all' CIT_MISC IO type for all other IO activity:
761 . cancellation of an extent lock,
762 . VM induced page write-out,
764 . other miscellaneous stuff.
766 The difference between CIT_MISC and other IO types is that CIT_MISC IO is
767 merely a context in which pages are owned and locks are enqueued, whereas
768 other IO types, in addition to being a context, are also state machines.
770 6.2. IO State Machine
771 =====================
The idea behind the cl_io state machine is that the initial `work' that has to
be done (e.g., writing a 3MB user buffer into a given file) is done as a
sequence of `iterations', and each iteration is executed by going through an
idiomatic sequence of steps:
778 - Prepare: determine what work is to be done at this iteration;
780 - Lock: enqueue and acquire all locks necessary to perform this iteration;
782 - Start: either perform iteration work synchronously, or post it
783 asynchronously, or both;
785 - End: wait for the completion of asynchronous work;
787 - Unlock: release locks, acquired at the "lock" step;
789 - Finalize: finalize iteration state.
791 cl_io is a layered entity and each step above is performed by invoking the
792 corresponding cl_io_operations method on every layer. As will be explained
793 below, this is especially important in the `prepare' step, as it allows layers
794 to cooperate in determining the scope of the current iteration.
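The driver of this state machine is cl_io_loop(); schematically, one iteration
looks like the sketch below, which uses the per-step entry points listed in
section 9.2 (simplified, with the cl_io_rw_advance() argument assumed and
error handling omitted):

    /* Schematic version of the cl_io_loop() iteration sequence. */
    do {
            cl_io_iter_init(env, io);  /* prepare: scope of this iteration */
            cl_io_lock(env, io);       /* lock: collect and enqueue locks  */
            cl_io_start(env, io);      /* start: do or post the work       */
            cl_io_end(env, io);        /* end: wait for asynchronous work  */
            cl_io_unlock(env, io);     /* unlock: release acquired locks   */
            cl_io_iter_fini(env, io);  /* finalize iteration state         */
            cl_io_rw_advance(env, io, io->ci_nob);  /* advance the position */
    } while (io->ci_continue);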
Before an IO can be started, the client has to make sure that the object's
layout is valid. The client checks whether a valid layout lock is cached in
client-side memory; otherwise, ll_layout_refresh() has to be called to fetch
an up-to-date layout from the MDT.
801 For CIT_READ or CIT_WRITE IO, a typical scenario is splitting the original user
802 buffer into chunks that map completely inside of a single stripe in the target
803 file, and processing each chunk as a separate iteration. In this case, it is
804 the LOV layer that (in lov_io_rw_iter_init() function) determines the extent of
805 the current iteration.
807 Once the iteration is prepared, the `lock' step acquires all necessary DLM
808 locks to cover the region of a file that is affected by the current iteration.
809 The `start' step does the actual processing, which for write means placing
810 pages from the user buffer into the cache, and for read means fetching pages
811 from the server, including read-ahead pages (see `immediate transfer' below).
Truncate and page fault are executed in a single iteration (currently, that
is; it would be easy to change the truncate implementation to, for instance,
truncate each stripe in a separate iteration, should the need arise).
One important planned generalization of this model is an out-of-order
execution of iterations.
822 A motivating example for this is a write of a large user level buffer,
823 overlapping with multiple stripes. Typically, a busy Lustre client has its
824 per-OSC caches for the dirty pages nearly full, which means that the write
825 often has to block, waiting for the cache to drain. Instead of blocking the
826 whole IO operation, CIT_WRITE might switch to the next stripe and try to do IO
827 there. Without such a `non-blocking' IO, a slow OST or an unfair network
828 degrades the performance of the whole cluster.
Another example is a legacy single-threaded application running on a
multi-core client machine, where IO throughput is limited by the single thread
copying data between the user buffer and the kernel pages. Multiple concurrent
IO iterations that can be scheduled independently on the available processors
eliminate this bottleneck by copying the data in parallel.
Obviously, parallel IO is not compatible with the usual `sequential IO'
semantics. For example, POSIX read and write have a very simple failure model,
where some initial (possibly empty) segment of the user buffer is processed
successfully and none of the remaining bytes were read or written. Parallel IO
can fail in much more complex ways.
842 For now, only sequential iterations are supported.
844 6.4. Data-flow: From Stack to IO Slice
845 ======================================
The parallel IO design outlined above implies that an ongoing IO can be
preempted by other IO and later resumed, all potentially in the same thread.
This means that IO state cannot be kept on the stack, as is customarily done
in UNIX file system drivers. Instead, the layered cl_io is used to store
information about the current iteration and the progress within it.
Coincidentally (or almost so), this is similar to the way IO requests are used
by the Windows driver stack.
855 A set of common fields in struct cl_io describe the IO and are shared by all
856 layers. Important properties so described include:
860 - A file (struct cl_object) against which this IO is executed;
862 - A position in a file where the read or write is taking place, and a count of
863 bytes remaining to be processed (for CIT_READ and CIT_WRITE);
865 - A size to which file is being truncated or expanded (for CIT_SETATTR);
867 - A list of locks acquired for this IO;
869 Each layer keeps IO state in its `IO slice', described below, with all slices
870 chained to the list hanging off of struct cl_io:
872 - vvp_io is used by the top-most layer of the Linux kernel client.
874 The most important state in vvp_io is an array of struct iovec, describing
875 user space buffers from or to which IO is taking place. Note that other
876 layers in the IO stack have no idea that data actually came from user space.
878 vvp_io contains kernel specific fields, such as VM information describing a
879 page fault, or the sendfile target.
- lov_io: the IO state private to the LOV layer is kept here. The most
  important IO state at the LOV layer is an array of sub-IOs. Each sub-IO is a
  normal struct cl_io, representing a part of the IO process for a given
  iteration. With the current sequential iterations, only one sub-IO is active
  at a time.
886 - osc_io: this slice stores IO state private to the OSC layer that exists within
887 each sub-IO created by LOV.
893 7.1. Immediate vs. Opportunistic Transfers
894 ==========================================
896 There are two possible modes of transfer initiation on the client:
898 - Immediate transfer: this is started when a high level IO wants a page or a
899 collection of pages to be transferred right away. Examples: read-ahead,
900 a synchronous read in the case of non-page aligned write, page write-out as
901 part of an extent lock cancellation, page write-out as a part of memory
902 cleansing. Immediate transfer can be both cl_req_type::CRT_READ and
903 cl_req_type::CRT_WRITE;
905 - Opportunistic transfer (cl_req_type::CRT_WRITE only), that happens when IO
906 wants to transfer a page to the server some time later, when it can be done
907 efficiently. Example: pages dirtied by the write(2) path. Pages submitted for
908 an opportunistic transfer are kept in a "staging area".
910 In any case, a transfer takes place in the form of a network RPC.
912 Pages queued for an opportunistic transfer are placed into a staging area
913 (represented as a set of per-object and per-device queues at the OSC layer)
914 until it is decided that an efficient RPC can be composed of them. This
915 decision is made by "a req-formation engine", currently implemented as part of
916 the OSC layer. Req-formation depends on many factors: the size of the resulting
917 RPC, RPC alignment, whether or not multi-object RPCs are supported by the
918 server, max-RPC-in-flight limitations, size of the staging area, etc. CLIO uses
osc_extent to group pages for req-formation; osc_extents are further managed
in a per-object red-black tree for efficient RPC formation.
922 Whenever a page from cl_page_list is added to a newly constructed req, its
923 cl_page_operations::cpo_prep() layer methods are called. At that moment, the
924 page state is atomically changed from cl_page_state::CPS_OWNED to
925 cl_page_state::CPS_PAGEOUT or cl_page_state::CPS_PAGEIN, cl_page::cp_owner is
926 zeroed, and cl_page::cp_req is set to the req. cl_page_operations::cpo_prep()
927 method at a particular layer might return -EALREADY to indicate that it does
928 not need to submit this page at all. This is possible, for example, if a page
submitted for read became up-to-date in the meantime, or, for write, if the
page does not have the dirty bit set. See cl_io_submit_rw() for details.
932 Whenever a staged page is added to a newly constructed req, its
933 cl_page_operations::cpo_make_ready() layer methods are called. At that moment,
934 the page state is atomically changed from cl_page_state::CPS_CACHED to
935 cl_page_state::CPS_PAGEOUT, and cl_page::cp_req is set to req. The
936 cl_page_operations::cpo_make_ready() method at a particular layer might return
937 -EAGAIN to indicate that this page is not currently eligible for the transfer.
939 The RPC engine guarantees that once the ->cpo_prep() or ->cpo_make_ready()
940 method has been called, the page completion routine (->cpo_completion() layer
941 method) will eventually be called (either as a result of successful page
942 transfer completion, or due to timeout).
To summarize, there are two main entry points into the transfer sub-system:
946 - cl_io_submit_rw(): submits a list of pages for immediate transfer;
948 - cl_io_commit_async(): places a list of pages into staging area for future
949 opportunistic transfer.
To submit a group of pages for immediate transfer struct cl_2queue is used. It
contains two page lists: qin (the input queue) and qout (the output queue).
Pages are linked into these queues by the cl_page::cp_batch list heads. Qin is
populated with the pages to be submitted for transfer, and the pages that were
actually submitted are placed onto qout. Not all pages from qin might end up
on qout, due to:

- ->cpo_prep() methods deciding that the page should not be transferred, or

- an unrecoverable submission error.
Pages not moved to qout remain on qin. It is up to the transfer submitter to
decide when to remove pages from qin and qout. Pages remaining on qin are
usually removed from this list right after a (partially unsuccessful) transfer
submission, while pages are usually left on qout until transfer completion.
This way the caller can determine when all pages from the list have been
transferred.
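A hedged sketch of immediate submission using cl_2queue follows; the helper
names (cl_2queue_init(), cl_page_list_add()), the ci_queue field and the
CRT_READ argument convention are assumptions based on the text above:

    /* Sketch: submit a group of pages for an immediate read transfer. */
    struct cl_2queue *queue = &io->ci_queue;

    cl_2queue_init(queue);                   /* empty qin and qout       */
    cl_page_list_add(&queue->c2_qin, page);  /* populate the input queue */

    rc = cl_io_submit_rw(env, io, CRT_READ, queue);
    /* pages actually sent have moved to qout; pages left on qin were not
     * submitted and are disposed of by the caller */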
The association between a page and an immediate transfer queue is protected by
cl_page::cl_mutex. This mutex is acquired when a cl_page is added to a
cl_page_list and released when the page is removed from the list.
975 7.3. Page Completion Handlers, Synchronous Transfer
976 ===================================================
978 When a transfer completes, for every transfer page, per-layer page completion
979 methods ->cpo_completion() are invoked. The page is still under the VM lock at
this moment. Completion methods are called bottom-to-top and it is the
responsibility of the last of them (i.e., the completion method of the
top-most layer---VVP) to release the VM lock.
984 Both immediate and opportunistic transfers are asynchronous in the sense that
985 control can return to the caller before the transfer completes. CLIO doesn't
986 provide a synchronous transfer interface at all and it is up to a particular
987 caller to implement it if necessary. The simplest way to wait for the transfer
completion is to wait on the page's VM lock. This approach is used implicitly
by the
989 Linux kernel. There is a case, though, where one wants to do transfer
990 completely synchronously without releasing the page VM lock: when
the ->prepare_write() method determines that a write goes from a
non-page-aligned buffer into a not-up-to-date page, a portion of the page has
to be fetched from
993 the server. The VM page lock cannot be used to synchronize transfer completion
994 in this case, because it is used to mark the page as owned by IO. To handle
this, VVP attaches a struct cl_sync_io to struct vvp_page. cl_sync_io contains
the number of pages still in IO and a synchronization primitive (struct
completion) which is signalled when the transfer of the last page completes.
The VVP page
998 completion handler (vvp_page_completion_common()) checks for attached
999 cl_sync_io and if it is there, decreases the number of in-flight pages and
1000 signals completion when that number drops to 0. A similar mechanism is used for
1007 8.1. Motivation, Server Environment Usage
1008 =========================================
1010 lu_env and related data-types (struct lu_context and struct lu_context_key)
1011 together implement a memory pre-allocation interface that Lustre uses to
1012 decrease stack consumption without resorting to fully dynamic allocation.
1014 Stack space is severely limited in the Linux kernel. Lustre traditionally
1015 allocated a lot of automatic variables, resulting in spurious stack overflows
1016 that are hard to trigger (they usually need a certain combination of driver
1017 calls and interrupts to happen, making them extremely difficult to reproduce)
1018 and debug (as stack overflow can easily result in corruption of thread-related
1019 data-structures in the kernel memory, confusing the debugger).
1021 The simplest way to handle this is to replace automatic variables with calls
1022 to the generic memory allocator, but
1024 - The generic allocator has scalability problems, and
1026 - Additional code to free allocated memory is needed.
1028 The lu_env interface was originally introduced in the MDS rewrite for Lustre
2.0 and matches the server-side threading model very well. Roughly speaking,
1030 lu_context represents a context in which computation is executed and
1031 lu_context_key is a description of per-context data. In the simplest case
1032 lu_context corresponds to a server thread; then lu_context_key is effectively a
1033 thread-local storage (TLS). For a similar idea see the user-level pthreads
1034 interface pthread_key_create().
1036 More formally, lu_context_key defines a constructor-destructor pair and a tags
1037 bit-mask. When lu_context is initialized (with a given tag bit-mask), a global
1038 array of all registered lu_context_keys is scanned, constructors for all keys
with matching tags are invoked, and their return values are stored in the
lu_context.
Once lu_context has been initialized, the value of any key allocated for this
context can be retrieved very efficiently by indexing into the per-context
array. The lu_context_key_get() function is used for this.
When a context is finalized, destructors are called for all keys allocated in
this context.
1049 The typical server usage is to have a lu_context for every server thread,
1050 initialized when the thread is started. To reduce stack consumption by the
1051 code running in this thread, a lu_context_key is registered that allocates in
1052 its constructor a struct containing as fields values otherwise allocated on
1053 the stack. See {mdt,osd,cmm,mdd}_thread_info for examples. Instead of doing
    int function(args) {
            /* structure "bar" in module "foo" */
            struct foo_bar bar;
            ...
    }

the code roughly does

    struct foo_thread_info {
            struct foo_bar fti_bar;
            ...
    };

    int function(const struct lu_env *env, args) {
            struct foo_bar *bar;

            bar = &lu_context_key_get(&env->le_ctx, &foo_thread_key)->fti_bar;
            ...
    }
struct lu_env contains two contexts:
1076 - le_ctx: this context is embedded in lu_env. By convention, this context is
1077 used _only_ to avoid allocations on the stack, and it should never be used to
1078 pass parameters between functions or layers. The reason for this restriction
  is that using contexts for implicit state sharing leads to code that is
  difficult to understand and modify.
- le_ses: this is a pointer to a context shared by all threads handling a
  given RPC. The context itself is embedded into struct ptlrpc_request.
  Currently a
1084 request is always processed by a single thread, but this might change in the
1085 future in a design where a small pool of threads processes RPCs
1088 Additionally, state kept in env->le_ses context is shared by multiple layers.
1089 For example, remote user credentials are stored there.
1091 8.2. Client Environment Usage
1092 =============================
On a client there is a lu_env associated with every thread executing Lustre
code. Again, it contains the env->le_ctx context used to reduce stack
consumption, while env->le_ses is used to share state between all threads
handling a given IO. Currently an IO is processed by a single thread;
env->le_ses is also used to efficiently allocate the cl_io slices
({vvp,lov,osc}_io).
1100 There are three important differences with lu_env usage on the server:
- While on the server there is a fixed pool of threads, any client thread can
  execute Lustre code. This makes it impractical to pre-allocate and
  pre-initialize lu_context for every thread. Instead, contexts are
  constructed on demand and, after use, returned into a global cache that
  amortizes the creation cost;
- Client call-chains frequently cross Lustre-VFS and Lustre-VM boundaries.
1109 This means that just passing lu_env as a first parameter to every Lustre
1110 function and method is not enough. To work around this problem, a pointer to
1111 lu_env is stored in a field in the kernel data-structure associated with the
1112 current thread (task_struct::journal_info), from where it is recovered when
1113 Lustre code is re-entered from VFS or VM;
1115 - Sometimes client code is re-entered in a fashion that precludes re-use of the
  higher level lu_env. For example, when a read or write incurs a page fault
1117 in the user space buffer memory-mapped from a Lustre file, page fault
1118 handling is a separate IO, independent of the already ongoing system call.
1119 The Lustre page fault handler allocates a new lu_env (by calling
1120 lu_env_get_nested()) in which the nested IO is going on. A similar situation
1121 occurs when client DLM lock LRU shrinking code is invoked in the context of a
1124 8.3. Sub-environments
1125 =====================
1127 As described above, lu_env (specifically, lu_env->le_ses) is used on a client
1128 to allocate per-IO state, including foo_io data on every layer. This leads to a
1129 complication at the LOV layer, which maintains multiple sub-IOs. As layers
1130 below LOV allocate their IO slices in lu_env->le_ses, LOV has to allocate an
1131 lu_env for every sub-IO and to carefully juggle them when invoking lower layer
1132 methods. The case of a single IO is optimized by re-using the top-environment.
Lookup ends up calling ll_update_inode() to set up a new inode with a given
1142 meta-data descriptor (obtained from the meta-data path). cl_inode_init() calls
1143 cl_object_find() eventually calling lu_object_find_try() that either finds a
1144 cl_object in the cache or allocates a new one, calling
1145 lu_device_operations::ldo_object_{alloc,init}() methods on every layer top to
1146 bottom. Every layer allocates its private data structure ({vvp,lov}_object) and
1147 links it into an object header (cl_object_header) by calling lu_object_add().
1148 At the VVP layer, vvp_object contains a pointer to the inode. The LOV layer
1149 allocates a lov_object containing an array of pointers to sub-objects that are
1150 found in the cache or allocated by calling cl_object_find (recursively). These
1151 sub-objects have LOVSUB and OSC layer data.
1153 A top-object and its sub-objects are inserted into a global FID-based hash
1154 table and a global LRU list.
1156 9.2. First IO to a File
1157 =======================
1159 After an object is instantiated as described in the previous use case, the
first IO call against this object has to create DLM locks. Subsequent
operations re-use the cached DLM locks (see below).
1163 A read call starts at ll_file_readv() which eventually calls
1164 ll_file_io_generic(). This function calls cl_io_init() to initialize an IO
1165 context, which calls the cl_object_operations::coo_io_init() method on every
1166 layer. As in the case of object instantiation, these methods allocate
1167 layer-private IO state ({vvp,lov}_io) and add it to the list hanging off of the
1168 IO context header cl_io by calling cl_io_add(). At the VVP layer, vvp_io_init()
1169 handles special cases (like count == 0), updates statistic counters, and in the
1170 case of write it takes a per-inode semaphore to avoid possible deadlock.
1172 At the LOV layer, lov_io_init_raid0() allocates a struct lov_io and stores in
1173 it the original IO parameters (starting offset and byte count). This is needed
1174 because LOV is going to modify these parameters. Sub-IOs are not allocated at
1175 this point---they are lazily instantiated later.
1177 Once the top-IO has been initialized, ll_file_io_generic() enters the main IO
1178 loop cl_io_loop() that drives IO iterations, going through
1180 - cl_io_iter_init() calling cl_io_operations::cio_iter_init() top-to-bottom
1181 - cl_io_lock() calling cl_io_operations::cio_lock() top-to-bottom
1182 - cl_io_start() calling cl_io_operations::cio_start() top-to-bottom
1183 - cl_io_end() calling cl_io_operations::cio_end() bottom-to-top
1184 - cl_io_unlock() calling cl_io_operations::cio_unlock() bottom-to-top
1185 - cl_io_iter_fini() calling cl_io_operations::cio_iter_fini() bottom-to-top
1186 - cl_io_rw_advance() calling cl_io_operations::cio_advance() bottom-to-top
repeatedly until cl_io::ci_continue remains 0 after an iteration. These "IO
iterations" move an IO context through consecutive states (see enum
cl_io_state). ->cio_iter_init() decides at each layer what part of the
remaining IO is to be done during the current iteration. Currently,
lov_io_rw_iter_init() is the only non-trivial implementation of this method.
It does the following:
- Except for the cases of truncate and O_APPEND write, it shrinks the IO
  extent recorded in the top-IO (starting offset and byte count) so that this
  extent is fully contained within a single stripe. This avoids "cascading
  evictions";
- It allocates sub-IOs for all stripes intersecting with the resulting IO
  range (which, in the case of a non-append write or read, means creating a
  single sub-IO) by calling cl_io_init(), which (as above) creates a cl_io
  context with lovsub_io and osc_io layers. The initialized cl_io is primed
  from the top-IO (lov_io_sub_inherit()) and cl_io_iter_init() is called
  against it;
- Finally, all sub-IOs for the current iteration are linked together into the
  lov_io::lis_active list.
1208 Now we have a top-IO and its sub-IO in CIS_IT_STARTED state. cl_io_lock()
1209 collects locks on all layers without actually enqueuing them: vvp_io_rw_lock()
1210 requests a lock on the IO extent (possibly shrunk by LOV, see above) and
1211 optionally on extents of Lustre files that happen to be memory-mapped onto the
1212 user-level buffer used for this IO. In the future layers like SNS might request
1213 additional locks, e.g., to protect parity blocks.
Locks requested by the ->cio_lock() methods are added to the cl_lockset
embedded into the top cl_io. The lockset contains two lock queues: "todo" and
"done". Locks are initially placed in the todo queue. Once locks from all
layers have been collected, they are sorted to avoid deadlocks
(cl_io_locks_sort()) and then enqueued by cl_lockset_lock(). Locks are moved
from the "todo" list into the "done" list when they are granted.
1222 At this stage we have top- and sub-IO in the CIS_LOCKED state with all needed
1223 locks held. cl_io_start() moves cl_io into CIS_IO_GOING mode and calls
1224 ->cio_start() method. In the VVP layer this method invokes some version of
1225 generic_file_{read,write}() function.
In the case of read, generic_file_read() calls the a_ops->readpage() method
for every non-up-to-date page. This method eventually (after obtaining the
cl_page corresponding to the VM page supplied to it) calls ll_io_read_page(),
which decides whether it is necessary to read ahead more pages by calling
ll_readahead(). The number of pages to be read ahead is determined by the read
pattern; it also factors in requirements from the different layers of the CLIO
stack, for example stripe alignment at the LOV layer and DLM lock coverage at
the OSC layer. The ->cio_read_ahead() callback is used to gather the
requirements from each layer; refer to lov_io_read_ahead() and
osc_io_read_ahead() for details.
ll_readahead() populates a queue with the target page and the pages from the
read-ahead window. The resulting queue is then submitted for immediate
transfer by calling cl_io_submit_rw(), which ends up calling
osc_io_submit_page() for every not-up-to-date page in the queue.
1243 ->readpage() returns at this point, and the VM waits on a VM page lock, which
1244 is released by the transfer completion handler before copying page data to the
In the case of write, generic_file_write() calls the a_ops->write_begin() and
a_ops->write_end() address space methods, which end up calling
ll_write_begin() and ll_write_end() respectively. These functions follow the
normal Linux protocol for write, including a possible synchronous read of a
non-overwritten part of a page (the ll_page_sync_io() call in
ll_prepare_partial_page()). The pages are placed onto the vui_queue list of
vvp_io. In the normal case the pages are committed after all of them have been
handled, by calling vvp_io_write_commit(), which calls cl_io_commit_async() to
submit the dirty pages into the OSC writeback cache, where grant is allocated
and the pages are added to a red-black tree of osc_extents. If there is not
enough grant on the client side, cl_io_commit_async() fails with -EDQUOT and
the pages are transferred immediately by calling ll_page_sync_io().
1260 9.3. Lock-less and No-cache IO
1261 ==============================
An IO context has a "locking mode", selected from the set {MAYBE, NEVER,
MANDATORY} (enum cl_io_lock_dmd), that specifies what degree of distributed
cache coherency is assumed by this IO. MANDATORY mode requires all caches
accessed by this IO to be protected by distributed locks. In NEVER mode no
distributed coherency is needed, at the expense of not caching the data. This
mode is required for the cases where the client cannot or will not participate
in the cache coherency protocol (e.g., a liblustre client that cannot respond
to lock blocking call-backs while in its compute phase). In MAYBE mode some of
the caches involved in this IO are used and are globally coherent, while other
caches are bypassed.
1274 O_APPEND writes and truncates are always executed in MANDATORY mode. All other
1275 calls are executed in NEVER mode by liblustre (see below) and in MAYBE mode by
1276 a normal Linux client.
1278 In MAYBE mode every OSC individually decides whether to use DLM. An OST might
1279 return -EUSERS to an enqueue RPC indicating that the stripe in question is
1280 contended and that the client should switch to the lockless IO mode. If this
1281 happens, OSC, instead of using ldlm_lock, creates a special "lockless OSC lock"
1282 that is not backed up by a DLM lock. This lock conflicts with any other lock in
1283 its range and self-cancels when its last user is removed. As a result, when IO
1284 proceeds to the stripe that is in lockless mode, all conflicting extent locks
1285 are cancelled, purging the cache. When IO against this stripe ends, the lock is
1286 cancelled, sending dirty pages (just placed in the cache by IO) back to the
1287 server and invalidating the cache again. "Lockless locks" allow lockless and
1288 no-cache IO mode to be implemented by the same code paths as cached IO.