
LVM mirror + migrate

In this small tutorial we will go over building a basic mirror and migrating that mirror to a new system if need be. This was done using Alpine Linux v3.10 but should be transferable to any system using LVM >= v2.0.

Useful background information can be found at tldp.org

Build

The initial system has three hard drives (/dev/vd{a,b,c}). The system will be configured using the Logical Volume Manager (LVM) as follows.

Your system may report drives as hdX or sdX; this example uses vdX.

The main OS will be installed on vda while the mirror will be built using vdb and vdc.
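If you are unsure which names your system assigned, lsblk (part of util-linux, which may need to be installed separately on a minimal Alpine system) gives a quick overview of all block devices and their sizes, e.g.,

# lsblk -o NAME,SIZE,TYPE

The two empty 100 MB drives destined for the mirror should be easy to identify by size.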

Hard drive preparation

Install your flavour of Linux on vda. Once installed, log in as root and format vdb and vdc to use the same partition layout and sizes using the fdisk command. Issue the following commands (or use sudo if desired),

# fdisk -l /dev/vdb
Disk /dev/vdb: 100 MB, 104857600 bytes, 204800 sectors
100 cylinders, 64 heads, 32 sectors/track
Units: sectors of 1 * 512 = 512 bytes

# fdisk /dev/vdb

Command (m for help): n
Partition type
   p   primary partition (1-4)
   e   extended
p
Partition number (1-4):
Value is out of range
Partition number (1-4): 1
First sector (32-204799, default 32):
Using default value 32
Last sector or +size{,K,M,G,T} (32-204799, default 204799):
Using default value 204799

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table
#

Repeat the same steps for the second hard drive. In the end you will have two drives with one partition each, e.g., /dev/vdb1 and /dev/vdc1.

It is important to note that for this mirror both hard drives should be the same size. In this example they are 100 MB.
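Before continuing it is worth confirming that both partition tables match, e.g.,

# fdisk -l /dev/vdb
# fdisk -l /dev/vdc

Each drive should now show a single partition of type 8e (Linux LVM) spanning the whole disk.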

LVM mirror creation

Now that we have extra drives in the system we want to create a mirror. In order to do that we first need to tell LVM which physical drives exist to be managed as physical volumes,

# pvcreate /dev/vdb1 /dev/vdc1
Physical volume "/dev/vdb1" successfully created.
Physical volume "/dev/vdc1" successfully created.
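You can confirm that LVM registered them with pvs (or pvdisplay for more detail),

# pvs /dev/vdb1 /dev/vdc1

Both physical volumes should be listed with the lvm2 format and, at this stage, no volume group assigned.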

Now you can use the physical volumes to create a volume group. A volume group manages the data of the physical volumes depending on how it is configured. To create a volume group do the following,

# vgcreate data /dev/vdb1 /dev/vdc1
Volume group "data" successfully created
# vgdisplay
  --- Volume group ---
  VG Name               data
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               192.00 MiB
  PE Size               4.00 MiB
  Total PE              48
  Alloc PE / Size       46 / 184.00 MiB
  Free  PE / Size       2 / 8.00 MiB
  VG UUID               s5LJNJ-6fsZ-CXcL-FZcU-hozf-cTfl-cPjKkW

Now that a volume group exists a mirrored logical volume can be created. I will try to use most of the available space shared between the two drives (192 MiB split across two mirrored legs gives 96 MiB per leg; I request 86 MiB to leave a couple of extents free for LVM's internal mirror metadata),

# lvcreate -L 86M -m1 -n datamirror data
  Rounding up size to full physical extent 88.00 MiB
  Logical volume "datamirror" created.
# lvs
  LV         VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  datamirror data rwi-aor--- 88.00m                                    100.00
  lv_root    vg0  -wi-ao---- <3.65g
  lv_swap    vg0  -wi-ao----  1.25g
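The Cpy%Sync column shows how far the two mirror legs have synchronised; 100.00 means the mirror is fully in sync. To verify that the two images really sit on different drives you can ask lvs for the backing devices,

# lvs -a -o +devices data

which lists the internal image sub-volumes along with the physical volume each one lives on.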

Checking the logical volume information can be done with lvdisplay,

# lvdisplay data
  --- Logical volume ---
  LV Path                /dev/data/datamirror
  LV Name                datamirror
  VG Name                data
  LV UUID                kXK2nP-j9Yz-fUjU-WB0B-B5qN-qTN1-WqKBlw
  LV Write Access        read/write
  LV Creation host, time alpine, 2019-12-17 14:09:16 +0000
  LV Status              available
  # open                 1
  LV Size                88.00 MiB
  Current LE             22
  Mirrored volumes       2
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:6

Now that we have a mirror, /dev/data/datamirror, we can create a filesystem on it. In this case I used the ext4 filesystem,

# mkfs.ext4 /dev/data/datamirror
mke2fs 1.45.2 (27-May-2019)
Creating filesystem with 90112 1k blocks and 22528 inodes
Filesystem UUID: a1c0d6b1-dbfa-4286-a8e6-3ab85d2c5f7f
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

From here you can mount it manually with

# mount /dev/data/datamirror /mnt

Or add it to /etc/fstab as you see fit.
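For reference, an /etc/fstab entry could look like the line below (the /mnt mount point is just for this example; choose whatever fits your layout),

/dev/data/datamirror  /mnt  ext4  defaults  0  2

With that in place, mount /mnt (or a reboot) brings the mirror up automatically.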

Migrate mirror to new system

Now that we have a working mirror, let us move it to a new installation. Before we do, let us grab the information of the old LVM setup,

# lvs
  LV         VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  datamirror data rwi-aor--- 88.00m                                    100.00
  lv_root    vg0  -wi-ao---- <3.65g
  lv_swap    vg0  -wi-ao----  1.25g
# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  data   2   1   0 wz--n- 192.00m 8.00m
  vg0    1   2   0 wz--n-  <4.90g    0
# pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/vda2  vg0  lvm2 a--  <4.90g    0
  /dev/vdb1  data lvm2 a--  96.00m 4.00m
  /dev/vdc1  data lvm2 a--  96.00m 4.00m

Luckily, all we need to do is deactivate and export the volume group data. The logical volume mirror we created will be preserved!

# umount /mnt
# vgchange --activate n data
  0 logical volume(s) in volume group "data" now active
# vgexport data
  Volume group "data" successfully exported
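If you want to double check before powering off, vgs will now flag the volume group as exported (an x appears in the third position of the Attr column),

# vgs data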

Now power down the system and move the physical drives to the new machine.

Move drives to new system

Once the drives are put into the new system we can scan for physical volumes, as the LVM metadata is still preserved on the drives,

# pvscan
  PV /dev/vdb1    is in exported VG data [96.00 MiB / 4.00 MiB free]
  PV /dev/vdc1    is in exported VG data [96.00 MiB / 4.00 MiB free]
  PV /dev/vda2   VG vg0             lvm2 [<4.90 GiB / 0    free]
  Total: 3 [<5.09 GiB] / in use: 3 [<5.09 GiB] / in no VG: 0 [0   ]

If pvscan detects the drives and reports their volume group as exported, we can import it into the new system and activate it,

# vgimport data
  Volume group "data" successfully imported
# vgchange --activate y data
  1 logical volume(s) in volume group "data" now active
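At this point the logical volume should be available again under its old path; lvdisplay can confirm it before mounting,

# lvdisplay /dev/data/datamirror

with LV Status reported as available.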

Now that the volume group data is imported and activated, we can mount it as before or use /etc/fstab,

# mount /dev/data/datamirror /mnt
# ls /mnt
doc         lost+found  src
data        fig         script
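If you would rather mount by UUID in /etc/fstab on the new system, blkid reports the filesystem UUID,

# blkid /dev/data/datamirror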
