LU-4257 llite: fast read implementation
For read operations, if a page is already in the client cache it must be
covered by a DLM lock. We can take advantage of this by reading cached
pages directly, without interacting with the rest of the Lustre I/O
stack. If the fast read cannot be satisfied from cache, the traditional
read path is used instead.
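A minimal sketch of the try-fast-then-fall-back shape follows. It is
illustrative only: ll_read_iter_sketch() and ll_regular_read() are
hypothetical names standing in for the fast-read entry point and the
existing lock-taking read path, and the actual patch additionally
coordinates with the Lustre readpage handler so a cache miss aborts the
fast attempt rather than issuing I/O. generic_file_read_iter() and
iov_iter_count() are standard kernel APIs.

/*
 * Illustrative sketch, not the code in this patch.
 * ll_regular_read() is a hypothetical stand-in for the traditional,
 * DLM-lock-taking read path.
 */
#include <linux/fs.h>
#include <linux/uio.h>

extern ssize_t ll_regular_read(struct kiocb *iocb, struct iov_iter *to);

static ssize_t ll_read_iter_sketch(struct kiocb *iocb, struct iov_iter *to)
{
        ssize_t rc;

        /*
         * Fast path: pages found in the page cache are already covered
         * by a DLM lock, so they can be copied out with no new lock
         * matching or RPC traffic.
         */
        rc = generic_file_read_iter(iocb, to);
        if (rc > 0 || iov_iter_count(to) == 0)
                return rc;

        /* Fast read made no progress; fall back to the traditional path. */
        return ll_regular_read(iocb, to);
}

The benefit comes from skipping the per-read I/O setup and lock handling
for data that is already cached, which dominates the cost of small reads.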
This patch can improve small read performance significantly.
Here is the performance data I collected:
+------------+----------------+-----------------+
| | read bs=4k | read bs=1M |
+------------+----------------+-----------------+
| w/o patch | 257 MB/s | 1.1 GB/s |
+------------+----------------+-----------------+
| w/ patch | 1.2 GB/s | 1.4 GB/s |
+------------+----------------+-----------------+
Signed-off-by: Jinshan Xiong <jinshan.xiong@intel.com>
Change-Id: I2ebac609a6c80b135ee64ba001f75fa2cc80faf2
Reviewed-on: http://review.whamcloud.com/20255
Tested-by: Jenkins
Reviewed-by: Bobi Jam <bobijam@hotmail.com>
Tested-by: Maloo <hpdd-maloo@intel.com>
Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>