During a write, if a page cannot be mapped to blocks all at once, it
causes an "ASSERTION( iobuf->dr_rw == 0 )" crash, triggered by an
overflowing access of the mapped blocks array. This happens easily on
Arm platforms with 64KB PAGE_SIZE, but not on x86_64 platforms with
4KB PAGE_SIZE: with a 4KB block size and a 4KB PAGE_SIZE, i == 0 and
blocks_left_page == 1, so the inner loop handles one block at a time,
and the outer loop condition "block_idx < block_idx_end" ensures that
the blocks[] array access does not overflow.

Check the actual mapped count so that the mapped blocks array access
does not overflow.
Fixes: 0271b17b80a8 ("LU-14134 osd-ldiskfs: reduce credits for new writing")
Change-Id: Icd46c04bea2d7930456840694d422758eebb4186
Signed-off-by: Xinliang Liu <xinliang.liu@linaro.org>
Reviewed-on: https://review.whamcloud.com/45288
Tested-by: jenkins <devops@whamcloud.com>
Reviewed-by: Andreas Dilger <adilger@whamcloud.com>
Reviewed-by: James Simmons <jsimmons@infradead.org>
Tested-by: Maloo <maloo@whamcloud.com>
Reviewed-by: Li Dongyang <dongyangli@ddn.com>
Reviewed-by: Oleg Drokin <green@whamcloud.com>
 	for (page_idx = page_idx_start, block_idx = start_blocks;
 	     block_idx < block_idx_end; page_idx++,
 	     block_idx += blocks_left_page) {
+		/* For cases where the filesystems blocksize is not the
+		 * same as PAGE_SIZE (e.g. ARM with PAGE_SIZE=64KB and
+		 * blocksize=4KB), there will be multiple blocks to
+		 * read/write per page. Also, the start and end block may
+		 * not be aligned to the start and end of the page, so the
+		 * first page may skip some blocks at the start ("i != 0",
+		 * "blocks_left_page" is reduced), and the last page may
+		 * skip some blocks at the end (limited by "count").
+		 */
 		page = pages[page_idx];
 		LASSERT(page_idx < iobuf->dr_npages);
 		i = block_idx % blocks_per_page;
 		blocks_left_page = blocks_per_page - i;
-		for (page_offset = i * blocksize; i < blocks_left_page;
+		if (block_idx + blocks_left_page > block_idx_end)
+			blocks_left_page = block_idx_end - block_idx;
+		page_offset = i * blocksize;
+		for (i = 0; i < blocks_left_page;
 		     i += nblocks, page_offset += blocksize * nblocks) {
 			nblocks = 1;