author		Mike Snitzer <snitzer@redhat.com>	2012-09-26 23:45:39 +0100
committer	Alasdair G Kergon <agk@redhat.com>	2012-09-26 23:45:39 +0100
commit		307615a26e95406c42c95916a66ba50434567e0f (patch)
tree		3fcb794cbd50958839c955ec492b3fb25fd801b6 /drivers/md
parent		979570e02981d4a8fc20b3cc8fd651856c98ee9d (diff)
dm thin: do not set discard_zeroes_data
The dm thin pool target claims to support the zeroing of discarded
data areas.  This turns out to be incorrect when processing discards
that do not exactly cover a complete number of blocks, so the target
must always set discard_zeroes_data_unsupported.

The thin pool target will zero blocks when they are allocated if the
skip_block_zeroing feature is not specified.  The block layer may send
a discard that only partly covers a block.  If a thin pool block is
partially discarded then there is no guarantee that the discarded data
will get zeroed before it is accessed again.  Due to this, thin devices
cannot claim discards will always zero data.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Joe Thornber <ejt@redhat.com>
Cc: stable@vger.kernel.org # 3.4+
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
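The alignment problem described above can be illustrated with a small
standalone sketch (a hypothetical userspace helper, not part of this
patch): a discard only zeroes the sectors it actually covers, so a
thin-pool block is only guaranteed zeroed when the request is aligned
to whole blocks.

/*
 * Hypothetical sketch, not kernel code: a discard request guarantees
 * zeroed blocks only when it covers whole thin-pool blocks.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool discard_covers_whole_blocks(uint64_t start_sector,
					uint64_t nr_sectors,
					uint64_t sectors_per_block)
{
	/* A partially covered block keeps its old contents outside the
	 * discarded range, so it cannot be reported as zeroed. */
	return (start_sector % sectors_per_block) == 0 &&
	       (nr_sectors % sectors_per_block) == 0;
}

int main(void)
{
	/* Example block size: 128 sectors = 64 KiB */
	printf("%d\n", discard_covers_whole_blocks(0, 256, 128)); /* 1: whole blocks */
	printf("%d\n", discard_covers_whole_blocks(0, 100, 128)); /* 0: partial block */
	return 0;
}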
Diffstat (limited to 'drivers/md')
-rw-r--r--	drivers/md/dm-thin.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
index af1fc3b2c2ad..d4209231069d 100644
--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -2274,6 +2274,7 @@ static int pool_ctr(struct dm_target *ti, unsigned argc, char **argv)
 		 * thin devices' discard limits consistent).
 		 */
 		ti->discards_supported = true;
+		ti->discard_zeroes_data_unsupported = true;
 	}
 
 	ti->private = pt;
@@ -2745,7 +2746,6 @@ static void set_discard_limits(struct pool *pool, struct queue_limits *limits)
 	 * boundary is not sent to this target.
 	 */
 	limits->discard_granularity = pool->sectors_per_block << SECTOR_SHIFT;
-	limits->discard_zeroes_data = pool->pf.zero_new_blocks;
 }
 
 static void pool_io_hints(struct dm_target *ti, struct queue_limits *limits)
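For reference, the discard_granularity line kept by this hunk converts
the pool's block size from 512-byte sectors to bytes.  A minimal sketch
of that arithmetic (assuming SECTOR_SHIFT == 9, its value in the
kernel; the block size below is an example, not from this patch):

/* Minimal sketch: discard_granularity in bytes for a pool whose block
 * size is expressed in 512-byte sectors (SECTOR_SHIFT == 9). */
#include <stdio.h>

#define SECTOR_SHIFT 9

int main(void)
{
	unsigned long sectors_per_block = 128;	/* example: 64 KiB block */
	unsigned long granularity = sectors_per_block << SECTOR_SHIFT;

	printf("discard_granularity = %lu bytes\n", granularity); /* 65536 */
	return 0;
}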