Logical Volume Manager (LVM) in Red Hat

Overview

Most modern Linux distributions support LVM, an open-source analogue to Storage Spaces in the Microsoft world. Many administrators are familiar with the traditional approach of partitioning a disk (GParted on Linux, diskpart on Microsoft) and formatting each partition with a filesystem (NTFS, ReFS, exFAT, ZFS, XFS, etc.). LVM adds these advantages: on-demand RAID provisioning, flexible volume moving and resizing, snapshots, and abstracted management of physical volumes.

LVM can be thought of as software RAID built on a baseline of Just a Bunch of Disks (JBOD) in conjunction with federation. Storage space is carved from a set of available physical drives, and redundancy (mirroring) and/or parity-based resiliency are seamless features of this technology.

The application of “thin provisioning” improves resource utilization and capacity planning. A physical hard drive can be removed from the pool, provided that enough free space exists elsewhere in the pool to hold its contents. Hence, failed or healthy drives can be added or removed with true plug and play, as long as redundancy or parity-based storage has been built into the design.
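As a minimal sketch of that removal workflow (borrowing the volume group and disk names used later in this post), the contents of a physical volume can be migrated off before the drive is retired:

pvmove /dev/sdd
vgreduce VOLUMEGROUP01 /dev/sdd
pvremove /dev/sdd

Explain:

# 'pvmove' :migrate all allocated extents off /dev/sdd onto other PVs in the same group
# 'vgreduce' :remove the now-empty physical volume from the volume group
# 'pvremove' :wipe the LVM label so the disk can be pulled or repurposed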

Theories and semantics aside, let’s dive into a real-world example (use case) as shown in the table below:

Mount points        /home/              /backup/
Logical Volumes     SHARE01 (RAID10)    SHARE02 (RAID6)
Volume Group        VOLUMEGROUP01
Physical Volumes    /dev/sdb  /dev/sdc  /dev/sdd  /dev/sde

The table above represents the three ‘layers’ of volume abstraction plus one layer of data access on a Linux system running LVM. Items in any of these layers can be added or removed without affecting system uptime. Here is how to deploy this design.

Clear a new disk’s partition table by filling the first 512 bytes with zeros, which effectively wipes the Master Boot Record (MBR):

dd if=/dev/zero of=/dev/sdb bs=512 count=1

Explain:

# 'dd' :copy and convert raw data, block by block
# 'if=/dev/zero' :input file /dev/zero, a pseudo-device that supplies endless zero bytes
# 'of=/dev/sdb' :output file /dev/sdb, the second physical disk
# 'bs=512' :block size of 512 bytes (the size of the MBR sector)
# 'count=1' :copy exactly 1 block
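Note that zeroing the first sector clears the MBR but leaves a GPT disk’s backup partition table at the end of the drive untouched. On systems with util-linux available, wipefs offers a more thorough alternative:

wipefs -a /dev/sdb

Explain:

# 'wipefs -a' :erase all detected signatures (partition tables and filesystem superblocks) from the disk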

Define LVM Physical Volumes

pvcreate /dev/sdb

View existing LVM physical volumes

pvscan -v

Output explain:

# PV :Physical Volume
# VG :Volume Group
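For a more compact view of the same information, pvs prints one summary line per physical volume (PV name, VG, size, free space), and pvdisplay prints the full details:

pvs
pvdisplay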

Assign Physical Volume to Volume Group

vgcreate VOLUMEGROUP01 /dev/sdb

Explain:

# Create a volume group named 'VOLUMEGROUP01' containing physical volume /dev/sdb
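Before the remaining disks can join a volume group, they must also be initialized as physical volumes (repeating the pvcreate step above, which accepts several devices at once):

pvcreate /dev/sdc /dev/sdd /dev/sde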

Add the other physical volumes onto the same volume group

vgextend VOLUMEGROUP01 /dev/sdc /dev/sdd /dev/sde

You can create, resize, and remove RAID10/RAID6 logical volumes in LVM, where data is striped across an array of disks. For large sequential reads and writes, a striped logical volume can improve I/O throughput.

To create a RAID 10 logical volume, use the following form of the lvcreate command. Note that RAID 10 consumes (stripes x 2) devices, so 2 stripes with 1 mirror uses all four disks in this pool:

lvcreate --type raid10 -m 1 -i 2 -L 100G -n SHARE01 VOLUMEGROUP01

Explain:

# create type=raid10 mirrors=1 stripes=2 size=100GB named='SHARE01' using VOLUMEGROUP01 pool
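The design table also calls for SHARE02, a RAID 6 volume to be mounted at /backup/. A caveat before the sketch below: LVM’s raid6 segment type needs at least five devices (three data stripes plus two parities), which is more than the four disks in this pool, so this step assumes a hypothetical fifth disk, /dev/sdf, has already been prepared with pvcreate and added to the group with vgextend:

lvcreate --type raid6 -i 3 -L 100G -n SHARE02 VOLUMEGROUP01

Explain:

# create type=raid6 stripes=3 (plus 2 parity devices = 5 disks) size=100GB named='SHARE02' using VOLUMEGROUP01 pool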

Format the logical volume as XFS (the Red Hat Enterprise Linux 7.5 default)

mkfs.xfs /dev/VOLUMEGROUP01/SHARE01

Mount the LV at the /home/ directory

mount /dev/VOLUMEGROUP01/SHARE01 /home/
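The mount above lasts only until the next reboot. As a minimal sketch of making it persistent (assuming the same LV path and filesystem used here), add a line to /etc/fstab:

/dev/VOLUMEGROUP01/SHARE01  /home  xfs  defaults  0 0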

# Check free space on ‘home’ folder

df -h /home/

Explain:

#'df -h /home/' :disk free, human readable, directory /home/
# The 'Filesystem' column should show /dev/mapper/VOLUMEGROUP01-SHARE01, and 'Mounted on' should show /home
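To visualize every layer at once (physical disks, the RAID sub-volumes LVM creates internally, the logical volume, and its mount point), lsblk prints the whole stack as a tree:

lsblk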

Check available space on VOLUMEGROUP01

vgdisplay VOLUMEGROUP01

Expand the ‘home’ volume by 2GB and resize the filesystem in the same step (-r grows the filesystem along with the LV):

lvextend -L +2G -r /dev/VOLUMEGROUP01/SHARE01

Extend LV ‘SHARE01’ by 100% of free space on VG ‘VOLUMEGROUP01’

lvextend -l +100%FREE /dev/VOLUMEGROUP01/SHARE01
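Because the command above omits -r, only the logical volume grows; the XFS filesystem inside it must then be expanded separately (xfs_growfs takes the mount point):

xfs_growfs /home

# grow the mounted XFS filesystem to fill the resized logical volume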

LVM treats M or m as mebibytes: each step up the unit ladder (K, M, G, T) multiplies by 1024 rather than 1000. This matches the binary convention used for memory, whereas drive manufacturers label capacity in decimal units, so an LVM ‘G’ is slightly larger than a marketed ‘GB’.

Summary

Although the walkthrough above hasn’t demonstrated some of the more advanced functionality, here are the takeaway notes from an LVM implementation.

The Pros:

1. This is JBOD to the extreme! A mix of hard drives with differing speeds and capacities can be pooled, with reads and writes served at their aggregated capability.

2. Redundancy is a must for mission-critical data! LVM provides it automatically once mirrored or parity-based logical volumes are configured.

3. Caching (not yet covered in this post): LVM can front a slow logical volume with a cache volume on faster media (lvmcache), with selectable write-back or write-through behavior.

The Cons:

1. True RAID 10 cannot be achieved with a simple two-way mirrored pool, even though that configuration hasn’t been shown in the example above: a two-way mirror keeps two copies of each extent, but it does not enforce the strict striped-mirror-pair layout of RAID 10, so failure tolerance and rebuild behavior differ.

2. Parity-based pools will significantly affect performance, since every write forces a parity recalculation (the classic read-modify-write penalty).

3. Combining hardware RAID with a software RAID pool means layering RAID on RAID: the two layers cannot coordinate with each other, each adds its own overhead, and the usual outcome is slower I/O with no extra protection.
