This post is mostly for my own reference. I am posting it here to possibly aid someone else in the future.

This post is based on an excellent article from the Linux Gazette that has been a valuable point of reference for me:

That article does a good job of clarifying the concepts behind various device mapper usage scenarios. It is ambitious, covering a lot of material in a short space. The one usage scenario that I found difficult to grasp is working with the snapshot-origin target. Using the above article as a guide, in this post I will distill the steps necessary to work with the snapshot-origin target only; I will ignore simple snapshots and encrypted snapshots. The basic motivation for using the snapshot-origin target is to obtain consistent backups of live filesystems.

My test environment:

Motherboard: Intel SAI2 w/ 2 x 1.4 GHz P3 CPUs
Disk controller: Silicon Image, Inc. SiI 3114 (not in RAID mode)
Physical disks: 2 x 1 TB disks (sda, sdb)
RAID mode: partition-based RAID 1 using the software RAID provided by the Linux kernel
Distribution: Slackware 13.1
Linux kernel: (my own config)
Disk filesystem: XFS

Here is the partition table for sdb (sda is identical):

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xe...

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1306    10490413+  fd  Linux raid autodetect
/dev/sdb2            1307        2612    10490445   83  Linux
/dev/sdb3            2613        3135     4200997+  fd  Linux raid autodetect
/dev/sdb4            3136      121601   951578145    5  Extended
/dev/sdb5            3136        3658     4200966   83  Linux
/dev/sdb6            3659        5226    12594928+  83  Linux
/dev/sdb7            5227      121601   934782156   fd  Linux raid autodetect

For reference, here is the relevant RAID info:
# cat /proc/mdstat
md2 : active raid1 sdb7[0] sda7[1]
      934782080 blocks [2/2] [UU]

For this problem I will document the steps to prepare a backup of /dev/md2 using a snapshot-origin device, with the copy-on-write (COW) snapshot allocated on /dev/sdb6. Hopefully, that is enough information to make the problem concrete enough to follow along.

Step 0: I am not using LVM; I am manipulating device mapper objects directly. To use this technique, I need to create a device mapper object representing the RAID device; this is necessary to be able to suspend the device with dmsetup.

When preparing the device for use, set up the object before mounting it.

# echo 0 $(blockdev --getsize /dev/md2) linear /dev/md2 0 | dmsetup create md2

Also, create a duplicate device:

# dmsetup table md2 | dmsetup create md2_dup

Mount it:

# mount /dev/mapper/md2 /mnt/md2


  • The duplicate device preserves the original device mapper table for the intended target. Saving the table serves two purposes: it provides the data for restoring the table when we finish, and the kernel requires a duplicate device when working with snapshot-origin devices.
  • Modify /etc/rc.d/rc.local to create the device mapper objects automatically at each boot; as far as I know, these devices are not persistent.
  • Modify fstab to point to /dev/mapper/md2 if applicable.
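
The `echo ... | dmsetup create` pipeline above simply feeds a one-line mapping table to dmsetup. As a sketch of how that line is put together (Python; the function name is my own, and the size in sectors comes from the `dmsetup table` output shown in Step 4):

```python
def linear_table(size_sectors, backing_dev, offset=0):
    """Build a device mapper 'linear' table line: start length target args.

    Mirrors: echo 0 $(blockdev --getsize DEV) linear DEV 0
    blockdev --getsize reports the size in 512-byte sectors.
    """
    return f"0 {size_sectors} linear {backing_dev} {offset}"

# /dev/md2 is 1869564160 sectors (see the dmsetup table output in Step 4)
print(linear_table(1869564160, "/dev/md2"))
# → 0 1869564160 linear /dev/md2 0
```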
  • Step 1: Prepare the COW snapshot and activate it.

    # dd if=/dev/zero of=/dev/sdb6 bs=512 count=32
    # dmsetup suspend md2
    # echo 0 $(blockdev --getsize /dev/mapper/md2_dup) snapshot /dev/mapper/md2_dup /dev/sdb6 p 32 | dmsetup create cow
    # echo 0 $(blockdev --getsize /dev/mapper/md2_dup) snapshot-origin /dev/mapper/md2_dup | dmsetup create origin
    # dmsetup table origin | dmsetup load md2
    # dmsetup resume md2


  • You need both a snapshot and a snapshot-origin device.
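
The two table lines in Step 1 follow the same start/length/target pattern; the snapshot target additionally takes the COW device, a persistence flag, and a chunk size. A sketch of both (Python; function names are my own, parameters as used in the commands above):

```python
def snapshot_table(size_sectors, origin_dev, cow_dev,
                   persistent="p", chunk_sectors=32):
    """'snapshot' target: reads come from origin_dev until a chunk is
    copied out; 'p' means the COW store on cow_dev is persistent;
    chunk size is in 512-byte sectors."""
    return (f"0 {size_sectors} snapshot {origin_dev} {cow_dev} "
            f"{persistent} {chunk_sectors}")

def snapshot_origin_table(size_sectors, origin_dev):
    """'snapshot-origin' target: writes to this device trigger the
    copy-on-write into any snapshots of origin_dev."""
    return f"0 {size_sectors} snapshot-origin {origin_dev}"

size = 1869564160  # sectors, i.e. blockdev --getsize /dev/mapper/md2_dup
print(snapshot_table(size, "/dev/mapper/md2_dup", "/dev/sdb6"))
print(snapshot_origin_table(size, "/dev/mapper/md2_dup"))
```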
  • Step 2: Mount COW Snapshot.

    # mount -o nouuid -o ro /dev/mapper/cow /mnt/md2_snap


  • XFS will refuse to mount without the nouuid option.
  • This example uses a read-only mount for a backup. It is also possible to "fork" the filesystem by mounting the COW read/write. I have not tested that mode.
  • Step 3: Use the backup method of your choice to perform a backup of the frozen snapshot mounted on /mnt/md2_snap. Normal file operations will continue on the "live" filesystem mounted on /mnt/md2. The specific backup method is not shown here.

    Step 4: Obtain miscellaneous statistics while snapshot is in effect.

    # dmsetup table md2
    0 1869564160 snapshot-origin 253:1
    # dmsetup table md2_dup
    0 1869564160 linear 9:2 0
    # dmsetup status cow
    0 1869564160 snapshot 6496/25189857 64
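
The fourth field of the snapshot status line is used/total sectors in the COW store, which makes it easy to script a usage check. A minimal parsing sketch (Python; the field layout is assumed from the output above):

```python
def snapshot_usage(status_line):
    """Parse a `dmsetup status` line for a snapshot target.

    Assumed layout (matching the output above):
      start length snapshot used_sectors/total_sectors metadata_sectors
    Returns (used, total, percent_used).
    """
    fields = status_line.split()
    if fields[2] != "snapshot":
        raise ValueError("not a snapshot target")
    used, total = (int(n) for n in fields[3].split("/"))
    return used, total, 100.0 * used / total

used, total, pct = snapshot_usage("0 1869564160 snapshot 6496/25189857 64")
print(f"COW store: {used}/{total} sectors used ({pct:.3f}%)")
```

A cron job could run this against `dmsetup status cow` and alert before the store fills.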


  • Keep an eye on the usage of the COW device. If the snapshot fills up, then I think your snapshot will not be complete, or may have other inconsistencies.
  • There will be some performance degradation while the snapshot is in effect, depending on what operations are necessary on the live mount. I anticipate that adding new files to the live mount is the biggest performance hit, potentially cutting write performance by more than half. As I understand the operation of the COW device, it works at the disk block level (i.e. not at the filesystem level). A write operation must first read the blocks about to be overwritten and save them to the COW device, then write the new data to the specified blocks.
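
To make the block-level copy-on-write behavior concrete, here is a toy simulation (my own illustration only, not how the kernel implements it): the first write to each chunk copies the old contents into the COW store, which is exactly the extra read+write that causes the performance hit, while the snapshot keeps reading the saved copies.

```python
class CowSnapshot:
    """Toy chunk-level copy-on-write snapshot (illustration only)."""

    def __init__(self, origin):
        self.origin = origin   # live device: chunk index -> contents
        self.cow = {}          # original contents of overwritten chunks

    def write(self, chunk, data):
        # First write to a chunk: read the old contents and save them to
        # the COW store (the extra read+write discussed above) ...
        if chunk not in self.cow:
            self.cow[chunk] = self.origin[chunk]
        # ... then write the new data to the live device.
        self.origin[chunk] = data

    def read_snapshot(self, chunk):
        # The snapshot sees the pre-write contents.
        return self.cow.get(chunk, self.origin[chunk])

origin = {0: "a", 1: "b"}
snap = CowSnapshot(origin)
snap.write(0, "A")
print(snap.read_snapshot(0), origin[0])
# → a A   (snapshot still sees "a"; the live device sees "A")
```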
  • Step 5: Tear down the snapshot and restore the initial table to the device.

    # dmsetup suspend md2
    # dmsetup remove origin
    # dmsetup remove cow
    # dmsetup table md2_dup | dmsetup load md2
    # dmsetup resume md2

    Step 6: When ready to do another snapshot backup, jump back to Step 1.

    Final Remarks

  • Disclaimer: Ensure all commands are applicable to your system before proceeding. Adapt commands as necessary for your system. Use at your own risk!
  • Disclaimer: Ensure that your "live" filesystems and the data being backed up are compatible with this method. I am not certain this technique is entirely foolproof when capturing certain live data (SQL databases, transactional systems, etc.). I am interested in hearing about known cases where extra caution is warranted.
  • In this example, I am working with a data volume (i.e. not the root filesystem.) To set this up properly to work with the root filesystem, more tricks are probably required inside the initrd environment. I performed a quick test from my own startup environment which verified that the root filesystem works on a device mapper object. However, in the general case, the user may be better advised to switch to using LVM instead of this raw target because LVM is supported by the standard Slackware initrd environment.
  • As far as I know, only one layer of snapshots is allowed. I don't think it is possible to have "a snapshot based on a snapshot," but I could be wrong.
  • I have a few GNU/Linux articles on my blog here.
  • Page Last Modified: 2010-12-10