In vvp_set_pagevec_dirty the lock order between i_pages and
lock_page_memcg was inverted, on the expectation that no
other users would conflict.

However, in vvp_page_completion_write the call to
test_clear_page_writeback expects to take lock_page_memcg
and then lock i_pages, which conflicts with the original
analysis.

The reported case shows as RCU stalls, with
vvp_set_pagevec_dirty blocked attempting to lock i_pages.
Fixes: a7299cb012f ("LU-9920 vvp: dirty pages with pagevec")
HPE-bug-id: LUS-8798
Signed-off-by: Shaun Tancheff <shaun.tancheff@hpe.com>
Change-Id: I47c2107ddbef4a76325928e982abfc0ea666f39b
Reviewed-on: https://review.whamcloud.com/38317
Tested-by: jenkins <devops@whamcloud.com>
Tested-by: Maloo <maloo@whamcloud.com>
Reviewed-by: Wang Shilong <wshilong@ddn.com>
Reviewed-by: Patrick Farrell <farr0186@gmail.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>
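The bug described above is the classic ABBA inversion: one path took the
i_pages xa_lock and then lock_page_memcg, the other took them in the
opposite order. A minimal userspace sketch of the fixed, consistent
ordering, with pthread mutexes as hypothetical stand-ins for
lock_page_memcg and the i_pages xa_lock (all names here are
illustrative, not Lustre APIs):

#include <assert.h>
#include <pthread.h>
#include <stdio.h>

/* Stand-ins for lock_page_memcg and the mapping's i_pages xa_lock. */
static pthread_mutex_t memcg_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t i_pages_lock = PTHREAD_MUTEX_INITIALIZER;
static int nr_dirty;

/* Writeback-completion path: memcg lock first, then i_pages. */
static void *completion_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&memcg_lock);
	pthread_mutex_lock(&i_pages_lock);
	nr_dirty--;
	pthread_mutex_unlock(&i_pages_lock);
	pthread_mutex_unlock(&memcg_lock);
	return NULL;
}

/* Dirtying path: after the fix it takes the locks in the same order;
 * taking i_pages first here is the inversion that could deadlock. */
static void *dirty_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&memcg_lock);
	pthread_mutex_lock(&i_pages_lock);
	nr_dirty++;
	pthread_mutex_unlock(&i_pages_lock);
	pthread_mutex_unlock(&memcg_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, completion_path, NULL);
	pthread_create(&b, NULL, dirty_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	assert(nr_dirty == 0);
	printf("done, nr_dirty=%d\n", nr_dirty);
	return 0;
}

With both paths agreeing on memcg-then-i_pages, neither thread can hold
one lock while waiting on the other in reverse, so the program always
terminates.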
 	struct page *page = pvec->pages[0];
 	struct address_space *mapping = page->mapping;
 	unsigned long flags;
+	unsigned long skip_pages = 0;
 	int count = pagevec_count(pvec);
 	int dirtied = 0;

-	/* From set_page_dirty */
-	for (i = 0; i < count; i++)
-		ClearPageReclaim(pvec->pages[i]);
-
+	BUILD_BUG_ON(PAGEVEC_SIZE > BITS_PER_LONG);
 	LASSERTF(page->mapping,
 		 "mapping must be set. page %p, page->private (cl_page) %p\n",
 		 page, (void *) page->private);

-	/* Rest of code derived from __set_page_dirty_nobuffers */
+	for (i = 0; i < count; i++) {
+		page = pvec->pages[i];
+
+		ClearPageReclaim(page);
+
+		lock_page_memcg(page);
+		if (TestSetPageDirty(page)) {
+			/* page is already dirty .. no extra work needed
+			 * set a flag for the i'th page to be skipped
+			 */
+			unlock_page_memcg(page);
+			skip_pages |= (1 << i);
+		}
+	}
+
 	ll_xa_lock_irqsave(&mapping->i_pages, flags);

 	/* Notes on differences with __set_page_dirty_nobuffers:
 	 * 3. No mapping is impossible. (Race w/truncate mentioned in
 	 *    dirty_nobuffers should be impossible because we hold the page lock.)
 	 * 4. All mappings are the same because i/o is only to one file.
-	 * 5. We invert the lock order on lock_page_memcg(page) and the mapping
-	 *    xa_lock, but this is the only function that should use that pair of
-	 *    locks and it can't race because Lustre locks pages throughout i/o.
 	 */
 	for (i = 0; i < count; i++) {
 		page = pvec->pages[i];
-		lock_page_memcg(page);
-		if (TestSetPageDirty(page)) {
-			unlock_page_memcg(page);
+		/* if the i'th page was unlocked above, skip it here */
+		if ((skip_pages >> i) & 1)
+			continue;
+
 		LASSERTF(page->mapping == mapping,
 			 "all pages must have the same mapping. page %p, mapping %p, first mapping %p\n",
 			 page, page->mapping, mapping);
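The patch's central technique is a two-pass split: TestSetPageDirty runs
under lock_page_memcg in a first pass, recording already-dirty pages in
the skip_pages bitmask, so the second pass under the i_pages lock never
needs lock_page_memcg. A minimal sketch of that bitmask bookkeeping,
with test_set() as a hypothetical stand-in for TestSetPageDirty() (all
names illustrative, not kernel APIs):

#include <assert.h>
#include <stdio.h>

#define PVEC_SIZE 15	/* stand-in for PAGEVEC_SIZE; must fit in a long */

/* Atomically-set-and-return-old in the kernel; plain here. */
static int test_set(int *flag)
{
	int old = *flag;

	*flag = 1;
	return old;
}

int main(void)
{
	int dirty[PVEC_SIZE] = { [2] = 1, [7] = 1 };	/* 2 and 7 already dirty */
	unsigned long skip_pages = 0;
	int dirtied = 0;
	int i;

	/* first pass: mark pages that were already dirty */
	for (i = 0; i < PVEC_SIZE; i++)
		if (test_set(&dirty[i]))
			skip_pages |= (1UL << i);

	/* second pass: account only the pages this call dirtied,
	 * skipping the ones flagged above */
	for (i = 0; i < PVEC_SIZE; i++) {
		if ((skip_pages >> i) & 1)
			continue;
		dirtied++;
	}

	assert(dirtied == PVEC_SIZE - 2);
	printf("dirtied %d of %d\n", dirtied, PVEC_SIZE);
	return 0;
}

The BUILD_BUG_ON in the patch enforces the same invariant the #define
comment hedges here: one bit per pagevec slot must fit in an unsigned
long, or the bitmask silently drops pages.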