Introduction


LVM (Logical Volume Management) is a versatile way to manage and stripe file systems across multiple logical volumes. The concept pairs perfectly with a cloud ecosystem such as AWS, where you can freely provision more disk space on demand. Other primary advantages of LVM are flexibility and direct control of your data at the block device level: you can dynamically grow or shrink volumes as requirements change, and you can assign meaningful names for readability. You can certainly pay for a third-party pre-configured, customized, and pre-hardened RHEL 8 AMI, but why not save the money and configure it yourself?

In this guide I will provide the steps and commands to lay the groundwork for a secure, hardened RHEL 8 system hosted in AWS, using common system tools available from the RHUI repository.


Preliminary Notes


  • Always create a backup before performing any of these commands. Create an EBS Snapshot or AMI of your instance if there is data you do not wish to lose (see the CLI sketch after this list).
  • Guide is centered around a fresh RHEL 8 AMI, intended to be a baseline for your company’s Golden AMI.
  • Ensure you are working in the same Availability Zone for all instances/volumes created; in my example here I am working in us-east-1a.
  • All instances used are on t3.micro on the Nitro platform.
  • Guide is written with the assumption of XFS file systems being used.
  • The AMI used in this guide is a PAYG RHEL 8.6 image, found in us-east-1 as “ami-06640050dc3f556bb” at the time of my testing. It is one of the default Quick Start AMIs from Red Hat and can also be found on the AWS Marketplace.
  • I launched two instances for this guide: one is converted to LVM and will become the new “base LVM AMI”, the other is a recovery/test working instance. I recommend using the same AMI for both instances for consistency.
  • This guide uses only one logical volume in one volume group. The conversion is still possible with multiple LVs (such as examplevg-root and examplevg-var), but the steps may differ and you may have to repeat some of them for each volume.
  • One known issue: with SELinux enabled, attempting to SSH to the converted instance can fail with errors. Set SELinux to permissive/disabled if you have issues connecting.
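For the backup bullet above, a minimal AWS CLI sketch run from your workstation (this assumes the AWS CLI is installed and configured; the volume and snapshot IDs are placeholders for your own values):

        # Snapshot the instance's root volume before making any changes
        $ aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
              --description "Pre-LVM-conversion backup"

        # Block until the snapshot has finished copying
        $ aws ec2 wait snapshot-completed --snapshot-ids snap-0123456789abcdef0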

Preparations


  1. First we check and verify that the base RHEL AMI comes with GPT and GRUB2, which means it has a separate bootable BIOS partition (as opposed to an MBR/Legacy GRUB layout, as on Ubuntu for example). Launching the base AMI and running the following command will confirm this; lines irrelevant to us are omitted with ... to save space.
# fdisk -l /dev/nvme0n1
    ...
    Disklabel type: gpt
    ...
    Device         Start      End  Sectors Size Type
    /dev/nvme0n1p1  2048     4095     2048   1M BIOS boot
    /dev/nvme0n1p2  4096 20971486 20967391  10G Linux filesystem
  • I needed to install the LVM management tools as well as xfsdump on this AMI; ensure you do this before continuing, alongside your text editor of choice if you do not prefer vi:
    # yum install lvm2 xfsdump -y
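  • A quick sanity check that both packages landed (rpm -q prints the installed package versions, or reports that a package is not installed):

    # rpm -q lvm2 xfsdump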
  2. Shut down this base instance, create a snapshot of the root volume, then make a new volume out of this snapshot in the same AZ. This leaves us with two identical root EBS volumes. Detach the original from the instance (note the device name shown in the management console); I will simply tag these with the names “OriginalRoot” for the original and “ClonedRoot” for the clone. ClonedRoot will be converted to an LVM root volume; OriginalRoot will be used to transfer data and serves as our failback in case of issues going forward in this guide (an AWS CLI sketch of this snapshot/clone/attach workflow appears at the end of this section).
  • Attach the original root volume to the recovery instance first so we can identify which volume is which:
        # lsblk
            NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
            nvme0n1       259:0    0   8G  0 disk
            ├─nvme0n1p1   259:1    0   8G  0 part /
            └─nvme0n1p128 259:2    0   1M  0 part
            nvme1n1       259:3    0  10G  0 disk
            ├─nvme1n1p1   259:4    0   1M  0 part
            └─nvme1n1p2   259:5    0  10G  0 part
  • nvme0n1 is our recovery root and can be ignored; nvme1n1 is our original root. Attach the cloned volume, and running lsblk again will likely show it as nvme2n1. Note down these names for reference, especially if they are different for you.
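  • A hedged AWS CLI sketch of the snapshot/clone/attach workflow above, run from your workstation (every ID is a placeholder; adjust the AZ, tags, and device name for your environment):

        # Snapshot the stopped instance's root volume
        $ aws ec2 create-snapshot --volume-id vol-0aaaaaaaaaaaaaaaa \
              --description "OriginalRoot pre-conversion"

        # Create the clone in the same AZ as the instances
        $ aws ec2 create-volume --snapshot-id snap-0bbbbbbbbbbbbbbbb \
              --availability-zone us-east-1a \
              --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=ClonedRoot}]'

        # Attach a volume to the recovery instance as a secondary disk
        $ aws ec2 attach-volume --volume-id vol-0cccccccccccccccc \
              --instance-id i-0dddddddddddddddd --device /dev/sdf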

Partitioning


  1. We will be using gdisk in this step to delete and re-create the primary, non-boot partition of ClonedRoot. Do not touch the boot partition (nvme2n1p1, in my case) here. A non-interactive sgdisk equivalent is sketched at the end of this section.
        # gdisk /dev/nvme2n1
            Command (? for help): p
            Disk /dev/nvme2n1: 20971520 sectors, 10.0 GiB
            ...
            Number  Start (sector)    End (sector)  Size       Code  Name
               1            2048            4095   1024.0 KiB  EF02
               2            4096        20971486   10.0 GiB    8300
  • Delete the primary Linux partition
        Command (? for help): d
        Partition number (1-2): 2
  • Verify only the boot partition remains
        Command (? for help): p
  • Create the new partition; press Enter twice to accept the defaults for the first and last sectors
        Command (? for help): n
        Partition number (2-128, default 2): 2
  • Enter the hex code for Linux LVM here instead of the default 8300 (Linux filesystem); Linux LVM is 8e00
        Current type is 'Linux filesystem'
        Hex code or GUID (L to show codes, Enter = 8300): 8e00
        Changed type of partition to 'Linux LVM'
  • Write the new GPT to the volume
        Command (? for help): w
            Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
            PARTITIONS!!
        Do you want to proceed? (Y/N): y
            OK; writing new GUID partition table (GPT) to /dev/nvme2n1.
            The operation has completed successfully.
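  • If you would rather script this step, sgdisk (shipped in the same gdisk package) can do the equivalent non-interactively. A minimal sketch assuming the same device name: -d 2 deletes partition 2, -n 2:0:0 recreates it using the default first/last sectors, -t 2:8e00 sets the Linux LVM type, and -p prints the table to verify

        # sgdisk -d 2 /dev/nvme2n1
        # sgdisk -n 2:0:0 -t 2:8e00 /dev/nvme2n1
        # sgdisk -p /dev/nvme2n1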

LVM Group


  1. Now prepare LVM on this volume using the standard procedure; I use an example volume group name here
        # pvcreate /dev/nvme2n1p2
            WARNING: xfs signature detected on /dev/nvme2n1p2 at offset 0. Wipe it? [y/n]: y
              Wiping xfs signature on /dev/nvme2n1p2.
              Physical volume "/dev/nvme2n1p2" successfully created.
    
        # vgcreate vgexample1 /dev/nvme2n1p2
              Volume group "vgexample1" successfully created
        # vgs
              VG       #PV #LV #SN Attr   VSize   VFree
              vgexample1   1   0   0 wz--n- <10.00g <10.00g
    
        # lvcreate -n lvexample1 -L 9G vgexample1
              Logical volume "lvexample1" created.
  • I am setting the LV to 9G, 1G smaller than the 10G volume I am testing with, for wiggle room. I also like to use -l 100%FREE to fully utilize a disk, which can be used instead of -L xG (see the sketch at the end of this section)
        # lvs
              LV        VG       Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
              lvexample1 vgexample1 -wi-a----- 9.00g
    
        # mkfs -t xfs /dev/vgexample1/lvexample1
        # lsblk -f | grep vgexample
              └─vgexample1-lvexample1 xfs               c5b05f13-3f07-443e-ab2d-8282f47d4a7d
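  • As mentioned above, the equivalent lvcreate if you want the LV to consume all remaining space in the volume group:

        # lvcreate -n lvexample1 -l 100%FREE vgexample1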

Data Transfer


  1. Since we are working with XFS, xfsdump can be utilized to transfer data from OriginalRoot to ClonedRoot; alternatively you could use rsync, but this may take longer (a hedged rsync sketch follows these commands). First let’s get the volumes mounted. Mounting with -o nouuid avoids XFS duplicate-UUID errors, since volumes cloned from the same AMI share filesystem UUIDs.
        # mkdir /orig /lvm
        # mount /dev/nvme1n1p2 /orig -o nouuid
        # mount /dev/mapper/vgexample1-lvexample1 /lvm
        # xfsdump -JA - /orig | xfsrestore -J - /lvm
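  • If you prefer rsync (as mentioned), a minimal sketch; -aAXH preserves permissions and timestamps, ACLs, extended attributes (including SELinux contexts), and hard links, and the trailing slashes copy the contents of /orig into /lvm:

        # rsync -aAXH /orig/ /lvm/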

Prepare fstab & GRUB Config


  1. Prepare a chroot environment so we can edit /etc/fstab for the new device dependency and also edit the GRUB configuration
        # for i in dev proc sys run; do mount -o bind /$i /lvm/$i; done
        # chroot /lvm
  • We have now entered a root environment on the newly created LVM volume. First we’ll configure the fstab so the OS will properly find the root volume at initial mount (see the before/after sketch below the entry)

        # vi /etc/fstab
             /dev/mapper/vgexample1-lvexample1 /   xfs     defaults   0 0
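
  • The stock RHEL AMI fstab typically references the root filesystem by UUID; that is the entry being replaced here. A hypothetical before/after (the UUID is a placeholder):

        UUID=1111aaaa-22bb-33cc-44dd-555555eeeeee /   xfs     defaults   0 0
        -->
        /dev/mapper/vgexample1-lvexample1         /   xfs     defaults   0 0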

  • Now configure GRUB for an LVM root volume

        # vi /etc/default/grub

  • Modify this line as follows, appending rd.lvm.lv in the format rd.lvm.lv=volumegroupname/logicalvolumename

  • Original line in my AMI followed by the modified one:

            GRUB_CMDLINE_LINUX="console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295 crashkernel=auto"
            -->
            GRUB_CMDLINE_LINUX="console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295 crashkernel=auto rd.lvm.lv=vgexample1/lvexample1"

  • Note that you must keep console=ttyS0,115200n8 console=tty0 net.ifnames=0 for RHEL systems hosted on AWS

  • Check whether the following file exists. If it does not, skip this part entirely; if it does, follow the next step. The RHEL 8.6 AMI used in this guide did not have this file, and it may be specific to RHEL 7 systems

            # cat /etc/dracut.conf.d/ec2.conf
            cat: /etc/dracut.conf.d/ec2.conf: No such file or directory

  • Skip the next step if you receive the same message above indicating the file does not exist

            # vi /etc/dracut.conf.d/ec2.conf
            omit_dracutmodules+="dm dmraid i18n plymouth crypt lvm mdraid qemu terminfo kernel-modules"
            -->
            omit_dracutmodules+="i18n plymouth crypt mdraid qemu terminfo kernel-modules"

  • We are removing dm, dmraid, and lvm from this line so that dracut no longer omits those modules from the initramfs.


Reinstall GRUB & initramfs


  1. Regenerate the initramfs and GRUB configurations
        # dracut --regenerate-all -f -vvvv
        # grub2-mkconfig -o /boot/grub2/grub.cfg
  • Errors will show for the other block devices; ignore these, but note whether nvme2n1/ClonedRoot generates any errors
        # grub2-install --modules 'part_gpt part_msdos lvm' /dev/nvme2n1
            Installing for i386-pc platform.
            Installation finished. No error reported.
  • Verify the kernel environment config
         # grub2-editenv list
            kernelopts=root=/dev/mapper/vgexample1-lvexample1 ro console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295 crashkernel=auto rd.lvm.lv=vgexample1/lvexample1
            saved_entry=ec2af0878045375fa25a6483fd4e492f-4.18.0-372.32.1.el8_6.x86_64
            boot_success=1
  • In my case root=/dev/mapper/vgexample1-lvexample1 has been auto-generated; if it does not appear in this output, append it to GRUB_CMDLINE_LINUX in /etc/default/grub from the previous section and regenerate. Finally, exit the chroot and clean up the mounts:
        # exit
        # for i in dev proc sys run; do umount -l /lvm/$i; done
        # umount /lvm /orig

Test Launch the Instance


  • You can now detach ClonedRoot and reattach it as the root volume of the original, stopped instance. Start the instance, monitor the boot process for any errors (I recommend looking at the Serial Console), and verify that the LVM conversion has completed without issue. This instance is now LVM-root backed and retains the PAYG billing model for RHEL in the AWS Marketplace, if you got the original AMI from there. This has been tested multiple times on RHEL 8.6 without any issues; these steps should theoretically work on RHEL 7, however RHEL 9 may require entirely new steps. A hedged AWS CLI sketch of the detach/reattach/launch flow follows.
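  • A minimal AWS CLI sketch of that flow (IDs are placeholders; /dev/sda1 matches the root device name of the RHEL Quick Start AMIs, but confirm against what you noted from the management console):

        # Detach ClonedRoot from the recovery instance, then attach it as the
        # original instance's root device
        $ aws ec2 detach-volume --volume-id vol-0cccccccccccccccc
        $ aws ec2 attach-volume --volume-id vol-0cccccccccccccccc \
              --instance-id i-0eeeeeeeeeeeeeeee --device /dev/sda1

        # Start the instance and pull the console log to watch for boot errors
        $ aws ec2 start-instances --instance-ids i-0eeeeeeeeeeeeeeee
        $ aws ec2 get-console-output --instance-id i-0eeeeeeeeeeeeeeee --output text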