Introduction
LVM (Logical Volume Management) is a versatile way to stripe file systems across multiple logical volumes. The concept pairs perfectly with a cloud ecosystem such as AWS, where you can freely provision more disk space on demand. Other primary advantages of LVM include flexibility and direct control of your data at the block device level, the ability to dynamically scale volume sizes up or down as requirements change, and meaningful device names for readability. You can certainly pay for a third-party pre-configured, customized, and pre-hardened RHEL 8 AMI, but why not save the money and configure it yourself?
In this guide I will provide the steps and commands to lay the groundwork for a secure, hardened RHEL 8 system hosted in AWS, using common system tools available from the RHUI repository.
Preliminary Notes
- Always create a backup before performing any of these commands. Create an EBS Snapshot or AMI of your instance if there is data you do not wish to lose.
- Guide is centered around a fresh RHEL 8 AMI, intended to be a baseline for your company's Golden AMI.
- Ensure you are working in the same Availability Zone for all instances/volumes created; in my example I am working in us-east-1a.
- All instances used are t3.micro, which run on the Nitro platform.
- Guide is written with the assumption of XFS file systems being used.
- The AMI used in this guide is a PAYG RHEL 8.6 image, found in us-east-1 as "ami-06640050dc3f556bb" at the time of my testing. It is one of the default Quick Start AMIs from Red Hat and can also be found on the AWS Marketplace.
- Two instances are launched for this guide: one is converted to LVM and becomes the new "base LVM AMI", the other is a recovery/test working instance. I recommend using the same AMI for both instances for consistency.
- This guide uses only one LV group. The conversion is still possible with multiple LVs, but steps may differ; if you plan on using multiple Logical Volumes (such as examplevg-root and examplevg-var), note that you may have to complete some steps multiple times.
- One known issue: with SELinux enabled, attempting to SSH to the converted instance can result in errors. Set SELinux to permissive/disabled if you have issues connecting.
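A quick sketch of relaxing SELinux for testing (setenforce only lasts until reboot; the config edit makes the change persistent):

# setenforce 0
# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config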
Preparations
- First we check and verify that the base RHEL AMI comes with GPT and GRUB2, meaning it has a separate bootable BIOS partition, as opposed to MBR/Legacy GRUB as used on Ubuntu, for example. Launching the base AMI and running the following command will confirm this; lines irrelevant here are omitted for space and replaced with
...
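A typical way to check this (my assumption as to the exact command; the root device name may differ on your instance):

# gdisk -l /dev/nvme0n1

On a GPT disk, gdisk -l reports "GPT: present" under the partition table scan and lists the small BIOS boot partition (gdisk code EF02) alongside the main Linux partition.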
- I needed to install the LVM management tools as well as xfsdump on this AMI; ensure you do this before continuing, along with your text editor of choice if you do not prefer vi:
# yum install lvm2 xfsdump -y
- Shut down this base instance, create a snapshot of the root volume, then make a new volume out of this snapshot in the same AZ. This leaves us with two identical root EBS volumes. Detach the original from the instance (note the management console mount point). I will simply tag these with the names "OriginalRoot" for the original and "ClonedRoot" for the clone. ClonedRoot will be converted to an LVM root volume; OriginalRoot will be used to transfer data and serves as our fallback in case of issues going forward in this guide
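If you prefer to script these steps, a rough AWS CLI equivalent (the volume and snapshot IDs here are hypothetical placeholders):

# aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "OriginalRoot snapshot"
# aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a --volume-type gp3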
- Attach the original root volume first to the recovery instance so we can identify which volume is which
- nvme0n1 is our recovery root and is ignored; nvme1n1 is our original root. Attach the cloned volume, and when running lsblk again it will likely show up as nvme2n1. Note down these names for reference, especially if they differ for you
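To double-check which NVMe device is which before partitioning (sizes and ordering will vary):

# lsblk -o NAME,SIZE,TYPE,MOUNTPOINT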
Partitioning
- We will be using gdisk in this step to delete and re-create the primary, non-boot partition of ClonedRoot. Do not touch the boot partition (nvme2n1p1, in my case) here; the full gdisk exchange is sketched at the end of this list
- Delete the primary Linux partition
- Verify only the boot partition remains
# Command (? for help): p
- Create the new partition, press enter twice to use defaults for first and last sectors
- Enter the hex code for Linux LVM here, 8e00, instead of 8300 for the default Linux FS
- Write the new GPT to the volume
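For reference, the whole exchange inside gdisk looks roughly like this (a sketch; prompts abbreviated, and partition numbers may differ on your volume):

# gdisk /dev/nvme2n1
Command (? for help): d
Partition number (1-2): 2
Command (? for help): p
Command (? for help): n
Partition number (2-128, default 2): (Enter)
First sector: (Enter)
Last sector: (Enter)
Hex code or GUID (L to show codes, Enter = 8300): 8e00
Command (? for help): w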
LVM Group
- Now prepare LVM on this volume using the standard procedure; I use an example volume group name here
# pvcreate /dev/nvme2n1p2
WARNING: xfs signature detected on /dev/nvme2n1p2 at offset 0. Wipe it? [y/n]: y
  Wiping xfs signature on /dev/nvme2n1p2.
  Physical volume "/dev/nvme2n1p2" successfully created.
# vgcreate vgexample1 /dev/nvme2n1p2
  Volume group "vgexample1" successfully created
# vgs
  VG         #PV #LV #SN Attr   VSize   VFree
  vgexample1   1   0   0 wz--n- <10.00g <10.00g
# lvcreate -n lvexample1 -L 9G vgexample1
  Logical volume "lvexample1" created.
- Setting the LV to 9G, 1G smaller than the 10G volume I am testing with, leaves some wiggle room. I also like to use -l 100%FREE, which can be used instead of -L xG to fully utilize a disk, as shown below
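The equivalent lvcreate using the percentage syntax with the same example names:

# lvcreate -n lvexample1 -l 100%FREE vgexample1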
Data Transfer
- Since we are working with XFS, xfsdump can be utilized to transfer data from OriginalRoot to ClonedRoot; alternatively rsync could be used, but it may take longer. First let's get the volumes mounted
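A minimal sketch of the mounts, assuming the device names from earlier and mount points of my choosing (the LV needs an XFS file system before it can be mounted, so I include that step as an assumption):

# mkfs.xfs /dev/vgexample1/lvexample1
# mkdir -p /mnt/original /mnt/cloned
# mount /dev/nvme1n1p2 /mnt/original
# mount /dev/vgexample1/lvexample1 /mnt/cloned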
- This final command automatically moves the data from original to cloned; modify as necessary if you are using multiple mount points, and allow it to fully complete. This only took me about five minutes on a fresh AMI. A typical form of the command is sketched below
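A typical xfsdump-to-xfsrestore pipeline (a sketch; -l 0 requests a full, level-0 dump, and the mount points match the sketch above):

# xfsdump -l 0 - /mnt/original | xfsrestore - /mnt/cloned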
- If LVM subdirectories such as /home are created, there will be warnings about XFS flags being ignored. I did not find any issue from creating these subdirectories, and the -A flag on xfsdump (which skips extended file attributes) can suppress the warnings. I suspect it may have something to do with inodes pointing to a different partition, but aside from the warnings no problems were reported. See the references for more information on the topic
Prepare fstab & GRUB Config
- Prepare a chroot environment so we may edit /etc/fstab for the new device dependency and also edit the GRUB configuration
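A minimal chroot setup, assuming the LVM root is still mounted at /mnt/cloned as in the earlier sketch:

# mount --bind /dev /mnt/cloned/dev
# mount --bind /proc /mnt/cloned/proc
# mount --bind /sys /mnt/cloned/sys
# chroot /mnt/cloned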
- The root environment on the newly created LVM volume is now entered. First we'll configure fstab so the OS will properly find the root volume at initial mount; an example entry is sketched below
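A sketch of the root entry using this guide's example names (it replaces the original UUID-based root line; your mount options may differ):

# vi /etc/fstab
/dev/mapper/vgexample1-lvexample1   /   xfs   defaults   0   0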
- Now configure GRUB for an LVM root volume
# vi /etc/default/grub
- Modify this line as follows, appending in the format rd.lvm.lv=volumegroupname/logicalvolumename
- Original line in my AMI, followed by the modified one:
GRUB_CMDLINE_LINUX="console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295 crashkernel=auto"
-->
GRUB_CMDLINE_LINUX="console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295 crashkernel=auto rd.lvm.lv=vgexample1/lvexample1"
- Note you must include console=ttyS0,115200n8 console=tty0 net.ifnames=0 for RHEL systems hosted on AWS
- Check if the following file exists; if it does not, skip this section entirely, and if it does, follow the next step. The RHEL 8.6 AMI used in this guide did not have it, and it may be specific to RHEL 7 systems
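One way to look for a dracut drop-in that omits these modules (my assumption of where such a setting would live; the exact path varies):

# grep -r omit_dracutmodules /etc/dracut.conf /etc/dracut.conf.d/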
- Skip the next step if you receive the same message above indicating the file is non-existent
- Remove dm, dm-raid, lvm from the line of this file
Reinstall GRUB & initramfs
- Regenerate the initramfs and GRUB configurations
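Still inside the chroot, the usual commands are along these lines (a sketch; you may need to pass the target kernel version to dracut explicitly if it differs from the running kernel):

# dracut -f --regenerate-all
# grub2-mkconfig -o /boot/grub2/grub.cfg
# grub2-install /dev/nvme2n1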
- Errors will show for the other block devices; ignore these, and note whether nvme2n1/ClonedRoot generates any errors
- Verify the kernel environment config
# grub2-editenv list
kernelopts=root=/dev/mapper/vgexample1-lvexample1 ro console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295 crashkernel=auto rd.lvm.lv=vgexample1/lvexample1
saved_entry=ec2af0878045375fa25a6483fd4e492f-4.18.0-372.32.1.el8_6.x86_64
boot_success=1
- In my case root=/dev/mapper/vgexample1-lvexample1 has been auto-generated; if it does not appear in this output, append it to the GRUB_CMDLINE_LINUX line in /etc/default/grub from the previous section
Test Launch the Instance
- You can now detach ClonedRoot and reattach it as root to the original, stopped instance. Start the instance, monitor the boot process for any errors (I recommend watching the Serial Console), and verify that the LVM conversion has completed without issue. This instance is now LVM-root backed and retains the PAYG billing model for RHEL in the AWS Marketplace, if you got the original AMI from there. This has been tested multiple times on RHEL 8.6 without any issues, and these steps should theoretically work on RHEL 7; RHEL 9 may require entirely new steps.
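The reattachment can also be done from the CLI (a sketch; the IDs are hypothetical placeholders, and /dev/sda1 is the root device name these RHEL AMIs use):

# aws ec2 detach-volume --volume-id vol-0cloned00000000000
# aws ec2 attach-volume --volume-id vol-0cloned00000000000 --instance-id i-0base0000000000000 --device /dev/sda1
# aws ec2 start-instances --instance-ids i-0base0000000000000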