Due to the mechanics of ldlm internals, enqueueing two different ibits
locks on the same resource is deadlock prone.
As such, change mdt_object_open_lock to release the open lock if it
becomes necessary to take an exclusive layout lock (to create objects).
It is OK to release the open lock right away, as it is never guaranteed
to be issued anyway.
Change-Id: Ib669e68323ea72c75a0a8bea289d8bea079309b0
Signed-off-by: Oleg Drokin <oleg.drokin@intel.com>
Reviewed-on: http://review.whamcloud.com/8083
Tested-by: Jenkins
Reviewed-by: Patrick Farrell <paf@cray.com>
Reviewed-by: Jinshan Xiong <jinshan.xiong@intel.com>
Tested-by: Maloo <hpdd-maloo@intel.com>
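The resulting control flow in mdt_object_open_lock can be sketched as
follows (pseudocode based on the hunks below; the surrounding branch
structure and the point where the layout lock is enqueued are assumed,
not shown in this patch):

```
take the open (ibits) lock on obj into lhc;
if (an exclusive layout lock is needed to create objects) {
	/* drop the open lock first: enqueueing a second ibits lock
	 * on the same resource is deadlock prone (LU-3601) */
	if (lustre_handle_is_used(&lhc->mlh_reg_lh))
		mdt_object_unlock(info, obj, lhc, 1);
	mdt_lock_handle_init(ll);
	mdt_lock_reg_init(ll, LCK_EX);
	enqueue the LCK_EX layout lock;
}
/* later: the open lock may now be absent even on success,
 * so clear DISP_OPEN_LOCK whenever the handle is unused */
if (rc != 0 || !lustre_handle_is_used(&lhc->mlh_reg_lh))
	mdt_clear_disposition(info, ldlm_rep, DISP_OPEN_LOCK);
```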
", open_flags = "LPO64"\n",
PFID(mdt_object_fid(obj)), open_flags);
+	/* We cannot enqueue another lock for the same resource we
+	 * already have a lock for, due to the mechanics of waiting
+	 * list iterating in ldlm; see LU-3601.
+	 * As such, drop the open lock we just got above; it is OK
+	 * not to have this open lock, as its main purpose is to
+	 * flush unused cached client open handles. */
+	if (lustre_handle_is_used(&lhc->mlh_reg_lh))
+		mdt_object_unlock(info, obj, lhc, 1);
+
LASSERT(!try_layout);
mdt_lock_handle_init(ll);
mdt_lock_reg_init(ll, LCK_EX);
+ if (rc != 0 || !lustre_handle_is_used(&lhc->mlh_reg_lh)) {
struct ldlm_reply *ldlm_rep;
ldlm_rep = req_capsule_server_get(info->mti_pill, &RMF_DLM_REP);
mdt_clear_disposition(info, ldlm_rep, DISP_OPEN_LOCK);
- mdt_object_unlock(info, obj, lhc, 1);
+ if (lustre_handle_is_used(&lhc->mlh_reg_lh))
+ mdt_object_unlock(info, obj, lhc, 1);