AutoGen Definitions lustre_dlm_flags.tpl;

flag[ 0] = {
    f-name = lock_changed;
    f-mask = on_wire;
    f-desc = 'extent, mode, or resource changed';
};

flag[ 1] = {
    f-name = block_granted;
    f-mask = on_wire, blocked;
    f-desc = 'Server placed lock on granted list, or a recovering client wants '
             'the lock added to the granted list, no questions asked.';
};

flag[ 2] = {
    f-name = block_conv;
    f-mask = on_wire, blocked;
    f-desc = <<- _EOF_
        Server placed lock on conv list, or a recovering client wants the
        lock added to the conv list, no questions asked.
        _EOF_;
};

flag[ 3] = {
    f-name = block_wait;
    f-mask = on_wire, blocked;
    f-desc = <<- _EOF_
        Server placed lock on wait list, or a recovering client wants the
        lock added to the wait list, no questions asked.
        _EOF_;
};

// Skipped bit 4

flag[ 5] = {
    f-name = ast_sent;
    f-mask = on_wire;
    f-desc = 'blocking or cancel packet was queued for sending.';
};

// Skipped bits 6 and 7

flag[ 8] = {
    f-name = replay;
    f-mask = on_wire;
    f-desc = <<- _EOF_
        Lock is being replayed.  This could probably be implied by the fact
        that one of BLOCK_{GRANTED,CONV,WAIT} is set, but that is pretty
        dangerous.
        _EOF_;
};

flag[ 9] = {
    f-name = intent_only;
    f-mask = on_wire;
    f-desc = "Don't grant lock, just do intent.";
};

// Skipped bits 10 and 11

flag[12] = {
    f-name = has_intent;
    f-mask = on_wire;
    f-desc = 'lock request has intent';
};

// Skipped bits 13, 14 and 15

flag[16] = {
    f-name = discard_data;
    f-mask = on_wire;
    f-desc = 'discard (no writeback) on cancel';
};

flag[17] = {
    f-name = no_timeout;
    f-mask = on_wire;
    f-desc = 'Blocked by group lock - wait indefinitely';
};

flag[18] = {
    f-name = block_nowait;
    f-mask = on_wire;
    f-desc = <<- _EOF_
        Server told not to wait if blocked.  For AGL, OST will not send
        glimpse callback.
        _EOF_;
};

flag[19] = {
    f-name = test_lock;
    f-mask = on_wire;
    f-desc = 'return blocking lock';
};

// Skipped bits 20, 21, and 22

flag[23] = {
    f-name = cancel_on_block;
    f-mask = on_wire, inherit;
    f-desc = <<- _EOF_
        Immediately cancel such locks when they block some other locks.
        Send cancel notification to the original lock holder, but expect no
        reply.  This is for clients (like liblustre) that cannot be expected
        to reliably respond to blocking ASTs.
        _EOF_;
};

// Skipped bits 24 through 29

flag[30] = {
    f-name = deny_on_contention;
    f-mask = on_wire;
    f-desc = 'measure lock contention and return -EUSERS if locking contention '
             'is high';
};

flag[31] = {
    f-name = ast_discard_data;
    f-mask = on_wire, ast;
    f-desc = <<- _EOF_
        These are flags that are mapped into the flags and ASTs of blocking
        locks.  Add FL_DISCARD to blocking ASTs.
        _EOF_;
};

flag[32] = {
    f-name = fail_loc;
    f-mask = local_only;
    f-desc = <<- _EOF_
        Used for marking lock as a target for -EINTR while cp_ast sleep
        emulation + race with upcoming bl_ast.
        _EOF_;
};

flag[33] = {
    f-name = skipped;
    f-mask = local_only;
    f-desc = <<- _EOF_
        Used while processing the unused list to know that we have already
        handled this lock and decided to skip it.
        _EOF_;
};
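/*
 * Illustrative note (not part of the generated output): the template named
 * above expands each flag[N] entry into a 64-bit constant with bit N set,
 * named after f-name.  Assuming the usual LDLM_FL_<NAME> convention, the
 * first few on_wire flags would come out roughly as:
 *
 *     #define LDLM_FL_LOCK_CHANGED   0x0000000000000001ULL  // bit  0
 *     #define LDLM_FL_BLOCK_GRANTED  0x0000000000000002ULL  // bit  1
 *     #define LDLM_FL_BLOCK_CONV     0x0000000000000004ULL  // bit  2
 *     #define LDLM_FL_BLOCK_WAIT     0x0000000000000008ULL  // bit  3
 *     #define LDLM_FL_AST_SENT       0x0000000000000020ULL  // bit  5
 *
 * Exact names, helper macros, and formatting depend on lustre_dlm_flags.tpl.
 */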
flag[34] = {
    f-name = cbpending;
    f-mask = local_only, hide_lock;
    f-desc = 'this lock is being destroyed';
};

flag[35] = {
    f-name = wait_noreproc;
    f-mask = local_only;
    f-desc = 'not a real flag, not saved in lock';
};

flag[36] = {
    f-name = cancel;
    f-mask = local_only;
    f-desc = 'cancellation callback already run';
};

flag[37] = {
    f-name = local_only;
    f-mask = local_only, hide_lock;
    f-desc = 'whatever it might mean';
};

flag[38] = {
    f-name = failed;
    f-mask = local_only, gone, hide_lock;
    f-desc = "don't run the cancel callback under ldlm_cli_cancel_unused";
};

flag[39] = {
    f-name = canceling;
    f-mask = local_only;
    f-desc = 'lock cancel has already been sent';
};

flag[40] = {
    f-name = local;
    f-mask = local_only;
    f-desc = 'local lock (i.e. no srv/cli split)';
};

flag[41] = {
    f-name = lvb_ready;
    f-mask = local_only;
    f-desc = <<- _EOF_
        XXX FIXME: This is being added to b_size as a low-risk fix to the
        fact that the LVB filling happens _after_ the lock has been granted,
        so another thread can match it before the LVB has been updated.  As
        a dirty hack, we set LDLM_FL_LVB_READY only after we've done the LVB
        poop.  This is only needed on LOV/OSC now, where LVB is actually
        used and callers must set it in input flags.

        The proper fix is to do the granting inside of the completion AST,
        which can be replaced with an LVB-aware wrapping function for OSC
        locks.  That change is pretty high-risk, though, and would need a
        lot more testing.
        _EOF_;
};

flag[42] = {
    f-name = kms_ignore;
    f-mask = local_only;
    f-desc = <<- _EOF_
        A lock contributes to the known minimum size (KMS) calculation until
        it has finished the part of its cancellation that performs write back
        on its dirty pages.  It can remain on the granted list during this
        whole time.  Threads racing to update the KMS after performing their
        writeback need to know to exclude each other's locks from the
        calculation as they walk the granted list.
        _EOF_;
};

flag[43] = {
    f-name = cp_reqd;
    f-mask = local_only;
    f-desc = 'completion AST to be executed';
};

flag[44] = {
    f-name = cleaned;
    f-mask = local_only;
    f-desc = 'cleanup_resource has already handled the lock';
};

flag[45] = {
    f-name = atomic_cb;
    f-mask = local_only, hide_lock;
    f-desc = <<- _EOF_
        Optimization hint: LDLM can run the blocking callback from the
        current context without involving a separate thread, in order to
        decrease the context-switch rate.
        _EOF_;
};

flag[46] = {
    f-name = bl_ast;
    f-mask = local_only;
    f-desc = <<- _EOF_
        It may happen that a client initiates two operations, e.g. unlink
        and mkdir, such that the server sends a blocking AST for conflicting
        locks to this client for the first operation, whereas the second
        operation has canceled this lock and is waiting for rpc_lock, which
        is taken by the first operation.  LDLM_FL_BL_AST is set by
        ldlm_callback_handler() in the lock to prevent the Early Lock Cancel
        (ELC) code from cancelling it.

        LDLM_FL_BL_DONE is to be set by ldlm_cancel_callback() when the lock
        cache is dropped to let ldlm_callback_handler() return EINVAL to the
        server.  It is used when the ELC RPC is already prepared and is
        waiting for rpc_lock, too late to send a separate CANCEL RPC.
        _EOF_;
};

flag[47] = {
    f-name = bl_done;
    f-mask = local_only;
    f-desc = 'whatever it might mean';
};

flag[48] = {
    f-name = no_lru;
    f-mask = local_only;
    f-desc = <<- _EOF_
        Don't put lock into the LRU list, so that it is not canceled due to
        aging.  Used by MGC locks; they are cancelled only at unmount or by
        callback.
        _EOF_;
};
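/*
 * Illustrative sketch (hypothetical, not generated by the template): the
 * extra f-mask attributes group flags into aggregate masks.  For instance,
 * the flags tagged hide_lock above (cbpending, local_only, failed and
 * atomic_cb -- bits 34, 37, 38 and 45) could be collected and tested in a
 * single operation along these lines; ldlm_lock_is_hidden() is an invented
 * name used only for this illustration:
 *
 *     #include <stdbool.h>
 *     #include <stdint.h>
 *
 *     #define LDLM_FL_CBPENDING       (1ULL << 34)
 *     #define LDLM_FL_LOCAL_ONLY      (1ULL << 37)
 *     #define LDLM_FL_FAILED          (1ULL << 38)
 *     #define LDLM_FL_ATOMIC_CB       (1ULL << 45)
 *     #define LDLM_FL_HIDE_LOCK_MASK  (LDLM_FL_CBPENDING | LDLM_FL_LOCAL_ONLY | \
 *                                      LDLM_FL_FAILED | LDLM_FL_ATOMIC_CB)
 *
 *     // true if any hide_lock bit is set in the lock's 64-bit flags word
 *     static bool ldlm_lock_is_hidden(uint64_t l_flags)
 *     {
 *             return (l_flags & LDLM_FL_HIDE_LOCK_MASK) != 0;
 *     }
 */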
flag[49] = {
    f-name = fail_notified;
    f-mask = local_only, gone;
    f-desc = <<- _EOF_
        Set for locks that failed and where the server has been notified.

        Protected by lock and resource locks.
        _EOF_;
};

flag[50] = {
    f-name = destroyed;
    f-mask = local_only, gone;
    f-desc = <<- _EOF_
        Set for locks that were removed from class hash table and will be
        destroyed when last reference to them is released.  Set by
        ldlm_lock_destroy_internal().

        Protected by lock and resource locks.
        _EOF_;
};

flag[51] = {
    f-name = server_lock;
    f-mask = local_only;
    f-desc = 'flag whether this is a server namespace lock';
};

flag[52] = {
    f-name = res_locked;
    f-mask = local_only;
    f-desc = <<- _EOF_
        It's set in lock_res_and_lock() and unset in unlock_res_and_lock().

        NB: compared with check_res_locked(), checking this bit is cheaper.
        Also, spin_is_locked() is deprecated for kernel code; one reason is
        that it works only for SMP, so users need to add extra macros like
        LASSERT_SPIN_LOCKED for uniprocessor kernels.
        _EOF_;
};

flag[53] = {
    f-name = waited;
    f-mask = local_only;
    f-desc = <<- _EOF_
        It's set once we call ldlm_add_waiting_lock_res_locked() to start
        the lock-timeout timer and it will never be reset.

        Protected by lock and resource locks.
        _EOF_;
};

flag[54] = {
    f-name = ns_srv;
    f-mask = local_only;
    f-desc = 'Flag whether this is a server namespace lock.';
};

flag[55] = {
    f-name = excl;
    f-mask = local_only;
    f-desc = 'Flag whether this lock can be reused.  Used by exclusive open.';
};
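/*
 * Illustrative sketch (hypothetical, not generated by the template): as
 * defined above, every on_wire flag occupies bits 0-31 and every local_only
 * flag occupies bits 32-55, so local state can be stripped before the flags
 * word is packed into an RPC by keeping only the low 32 bits.  The helper
 * name is invented for illustration; real code may instead mask with the
 * exact set of defined on_wire bits:
 *
 *     #include <stdint.h>
 *
 *     // drop all local_only bits (32 and above), keep the on_wire range
 *     static uint64_t ldlm_flags_strip_local(uint64_t l_flags)
 *     {
 *             return l_flags & 0x00000000ffffffffULL;
 *     }
 */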