LVM snapshot performance

The Linux Logical Volume Manager (LVM) supports creating snapshots of logical volumes (LVs) using the device mapper. Device mapper implements snapshots with a copy-on-write scheme: the first time a chunk of either the source LV or the new snapshot LV is written to, the original data is copied out of the way first.
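For reference, creating a snapshot is a single lvcreate call. This is a minimal sketch assuming a volume group called vg0 containing an LV called lv0; the names and the 1G size are placeholders.

    # Create a 1 GiB snapshot of vg0/lv0. Space in the snapshot is only consumed
    # as chunks of the origin (or of the snapshot itself) are written to and the
    # original data is copied across.
    lvcreate --snapshot --size 1G --name lv0-snap /dev/vg0/lv0

    # The Snap%/Data% column (depending on LVM version) shows how full the
    # copy-on-write area is.
    lvs vg0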

So a write to a normal LV is just a write, but the first write to a chunk of a snapshotted LV (or of the snapshot itself) involves reading the original data, writing it out to the snapshot’s copy-on-write area, and then writing some metadata recording the copy.

This obviously hurts performance, and because device mapper’s implementation is very basic, it hurts a lot. My tests show synchronous sequential writes to a snapshotted LV are around 90% slower than writes to a normal LV.

Once a chunk has been copied, further writes to it are only about 15% slower. Reads barely suffer at all, taking roughly a 5% hit.
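If you want to reproduce the effect yourself, something as crude as dd with synchronous writes will show it; the mount point, block size and count below are placeholders rather than the exact parameters I used.

    # Synchronous sequential write test. Run it against a filesystem on a plain
    # LV, then again after snapshotting that LV, and compare the throughput.
    dd if=/dev/zero of=/mnt/lv0/testfile bs=1M count=1024 oflag=dsync

    # Overwriting the same file in place mostly hits chunks that have already
    # been copied, which is where the much smaller slowdown shows up.
    dd if=/dev/zero of=/mnt/lv0/testfile bs=1M count=1024 oflag=dsync conv=notrunc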

That said, not many workloads involve sustained full-speed sequential writes to a filesystem, so LVM snapshots are still useful in most circumstances.

I did some tests to see how writes to one snapshotted LV impacted the performance of writes to a completely separate normal LV. Does a snapshotted LV ruin the performance of all your other LVs? Yes, especially if you’re using the cfq disk scheduler. Switching to the deadline scheduler made things considerably better for the normal LV (but slowed writes to the snapshotted LV a little further).
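For reference, the scheduler can be checked and switched at runtime through sysfs; sda here is a placeholder for whatever device (or RAID controller device) backs your volume group.

    # See which I/O scheduler the device is currently using (shown in brackets).
    cat /sys/block/sda/queue/scheduler

    # Switch it from cfq to deadline.
    echo deadline > /sys/block/sda/queue/scheduler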

I did these tests on a 12-disk hardware RAID10 system. The test is a synthetic benchmark, so I urge you to do your own tests, but it’s safe to say that device mapper does not implement clever snapshotting like btrfs or ZFS – don’t expect great performance from it.

Improving LVM snapshot performance

There are a few ways to improve the performance of LVM snapshots. The most obvious one is the chunk size, which can be set when creating the snapshot. It controls the size of the unit of data that gets copied and written on the first write to each chunk. The best setting depends on a lot of things, such as your RAID stripe size and your usage patterns.
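For example, the chunk size is just an option to lvcreate; the 256k value below is only an illustration, not a recommendation.

    # Create the snapshot with an explicit 256 KiB chunk size. Larger chunks mean
    # fewer, bigger copy operations; smaller chunks mean less data copied per write.
    lvcreate --snapshot --size 1G --chunksize 256k --name lv0-snap /dev/vg0/lv0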

There is an as-yet uncommitted patch that improves snapshot write performance somewhat by being smarter about the disk queueing, but it’s still slow.

Also, device mapper supports non-persistent snapshots (i.e. ones that are lost after a reboot), which avoid having to write the change metadata to disk and so save a lot of seeks and writes, but LVM doesn’t seem to support creating these yet.

Putting the snapshot’s copy-on-write device on a separate disk would help too – I’m not sure it’s possible with LVM, but device mapper does support it.
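To illustrate both of those last two points, here is a simplified sketch of the raw device-mapper tables (see Documentation/device-mapper/snapshot.txt in the kernel tree). The device names, 1 GiB length and 8 KiB chunk size are placeholders, and a real setup needs the origin suspended and remapped, which is exactly the plumbing LVM normally does for you.

    # snapshot target: <origin> <COW device> <persistent? P|N> <chunksize in sectors>
    # "N" makes the snapshot transient (metadata kept in memory, lost on reboot),
    # and the COW device can be a partition on a completely separate disk.
    echo "0 2097152 snapshot /dev/sda2 /dev/sdb1 N 16" | dmsetup create snap

    # Writes to the origin only trigger the copy-out if they go through a
    # snapshot-origin mapping.
    echo "0 2097152 snapshot-origin /dev/sda2" | dmsetup create snap-origin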

Comments

Also note that regular (i.e. buffered) writes to a cache-cold page in a file will also incur a read and one or more writes unless an entire page is overwritten.

Also note that regular (i.e. buffered, non-O_SYNC) writes can be deferred for 60 or 90 seconds or whatever the writeback interval is set to.

Is the write slowdown in submitting the writes, or in waiting for them all to hit the disk?

[…] built-in support for compression and very fast snapshots — LVM snapshots are known to cause serious degradation to write performance once enabled, which ZFS […]

Joe Thornber says:

We’ve got a new device-mapper target that does thin provisioning and ‘clever snapshotting like btrfs’. It will hopefully be in the next upstream Linux release. Performance is a lot better, particularly with large numbers of snapshots of a volume.

john says:

Joe, I’m assuming you’re referring to dm-thin. Looks great!

For anyone who’s interested, documentation on dm-thin is available here: https://github.com/jthornber/linux-2.6/blob/thin-stable/Documentation/device-mapper/thin-provisioning.txt

someone says:

So, how long will it take for LVM to support dm-thin?
