Migrating Between Major Rocky Linux Versions Using Multiple Nodes
Introduction
Businesses frequently want to migrate between major versions of their Linux distribution of choice in order to stay ahead of security threats, take advantage of improved hardware compatibility in a newer kernel, and benefit from other improvements that come with a newer OS version.
This article goes over how to migrate configurations, system settings, users, and more from an older system to a newer one.
⚠️ WARNING The Rocky Linux project does not support in-place upgrades between major versions of Rocky Linux. This is also CIQ's stance: we recommend migrating between two or more nodes or, in the case of a single node, backing up the node's contents and performing a fresh install.
⚠️ WARNING Once you complete the migration, CIQ recommends performing a human review of the migrated configurations before applying them to the new system. There are thousands of changes between major Rocky Linux versions, including removed and added packages, as well as version jumps of core system components such as the kernel and gcc. Incompatible packages and system configurations will occur, which is why a manual review is required.
⚠️ WARNING The playbooks linked here have been tested on two Azure Rocky Linux VMs, as well as on two bare-metal servers. The testing covered migrations from Rocky Linux 8 to 9 and from 8 to 10. The playbooks are not meant to be comprehensive, and CIQ recommends expanding and modifying them to suit your environment.
Problem
You are moving between major Rocky Linux versions. You have two or more nodes. One node is running a newer version of Rocky Linux, while the other is running an older version that is either experiencing hardware failure or approaching OS end of life. You want to migrate users, systemd services, cron jobs, and more to the newer system, which can be a tedious and drawn-out process when done manually.
Resolution
Prerequisites
- Two nodes: one running an older major version of Rocky Linux and the other running a newer version.
- A separate machine configured as the Controller, responsible for orchestrating the migration between the two Rocky Linux nodes using Ansible.
Ansible setup
If you have not already done so, install Ansible on a separate machine that is neither the Source Node nor the Target Node. This machine will be the Controller and will orchestrate the migration between the two nodes. We recommend using pip to install Ansible:
```bash
# Install python3 and pip if not already available:
sudo dnf install -y python3 python3-pip

# Install Ansible via pip for your user:
pip3 install --user ansible

# Or, install system-wide if required:
sudo pip3 install ansible
```
- Ensure the required Ansible Galaxy collections are installed:

```bash
ansible-galaxy collection install -r requirements.yml
```
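For reference, a collections requirements file takes the following shape. This is a minimal sketch, not the file shipped with the playbooks; the collection names below are assumptions chosen as common examples:

```yaml
# Hypothetical requirements.yml — the real file ships with the playbook tarball.
# The collections listed here are common choices and are shown only as an example.
collections:
  - name: community.general
  - name: ansible.posix
```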
Migration playbook setup
- Place all configuration files available here into the same directory.
- Go through and edit any files to better match your environment. The files that contain <> characters require input. An example set of playbooks has been included in the above tarball to help you.
- A key file is the inventory.ini file, which contains the Source Node and Target Node IP addresses and the user (ideally root or a user with sudo privileges) that you want to log in with.
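For reference, an inventory.ini for this setup could look like the sketch below. The host aliases source and target match the ping output shown later in this article, but the inventory file shipped with the playbooks is authoritative; the layout and variables here are assumptions:

```ini
# Hypothetical inventory.ini — host aliases and variables are illustrative.
[source]
source ansible_host=<SOURCE_NODE_IP> ansible_user=root

[target]
target ansible_host=<TARGET_NODE_IP> ansible_user=root
```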
Migration testing
- Copy your SSH public key to both Nodes:

```bash
ssh-copy-id -i ~/.ssh/id_rsa.pub <USERNAME>@<SOURCE_NODE_IP>
ssh-copy-id -i ~/.ssh/id_rsa.pub <USERNAME>@<TARGET_NODE_IP>
```
- Check that both nodes can be reached:

```bash
ansible all -i inventory.ini -m ping
```
- You should see a result similar to the following output:

```
howard@ciq-rocky-linux10:~/ansible-baremetal-testing$ ansible all -i inventory.ini -m ping
source | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
target | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```
- If the ping checks fail, ensure that python3 is installed on all Nodes, that public and private SSH keys are available, and that the details under inventory.ini are correct. If migrating from an older version of Rocky Linux, such as Rocky Linux 8.10, install a later version of python from the powertools repository:

```bash
sudo dnf config-manager --set-enabled powertools
sudo dnf install -y python39
```
- The version of python3 used must be later than python3.6, which is installed by default on Rocky Linux 8.10 unless a newer version of python is installed.
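If a Node ends up with more than one Python version installed, you can point Ansible at the newer interpreter explicitly using the standard ansible_python_interpreter inventory variable. A minimal sketch, assuming the python39 package installs to /usr/bin/python3.9:

```ini
# The interpreter path is an assumption; verify it on the node first.
source ansible_host=<SOURCE_NODE_IP> ansible_user=root ansible_python_interpreter=/usr/bin/python3.9
```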
- Run the preflight-check.yml playbook. A total of 6 checks should report back as healthy, covering disk space availability, network connectivity, system information, and more:

```bash
ansible-playbook -i inventory.ini preflight-check.yml
```
Migrating the data
- Start the migration between the Nodes:

```bash
ansible-playbook -i inventory.ini migrate-rocky.yml
```
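The actual tasks live in migrate-rocky.yml, but conceptually each phase exports configuration from the Source Node and stages it for review on the Target Node. The following is a hypothetical sketch of one such export step, not the playbook's real contents; the module choices, task names, and paths are assumptions:

```yaml
# Hypothetical sketch of a single export step — illustrative only.
- name: Export cron configuration from the Source Node
  hosts: source
  tasks:
    - name: Archive system cron configuration
      community.general.archive:
        path:
          - /etc/crontab
          - /etc/cron.d
        dest: /tmp/cron_export.tar.gz
        mode: "0600"

    - name: Fetch the archive to the Controller for staging
      ansible.builtin.fetch:
        src: /tmp/cron_export.tar.gz
        dest: ./exports/
        flat: true
```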
- After the final phase, Phase 5 - Cleanup SSH access, is finished, the migration is complete.
- On the Target Node, a full report of the migration can be found under /root/MIGRATION_COMPLETE_<UNIX_TIME>.txt.
- Under the chosen user account's home directory on the Target Node (in this case /root/migration_review), you'll find the migrated firewall, network, services, and ssh configurations ported over.
- On the Target Node, /tmp/filtered_packages.txt contains a list of the core system packages that came from the Source Node. Further exports, such as security, users, and system configurations, can be found in /tmp/system_export/.
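One way to triage that package list during review is to check which Source Node packages still exist on the newer release. A sketch to run on the Target Node, assuming /tmp/filtered_packages.txt holds one package name per line:

```bash
# Flag Source Node packages that dnf cannot find on the Target Node's release.
# Assumes one package name per line in /tmp/filtered_packages.txt.
while read -r pkg; do
    dnf info "$pkg" >/dev/null 2>&1 || echo "Not found on target: $pkg"
done < /tmp/filtered_packages.txt
```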
Human review
Review the data that has been migrated over to the new node and work with your System Administrators to safely apply as many of the configurations as possible without introducing additional incompatibilities.
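For example, a typical review step is to diff a migrated configuration file against the live one before deciding whether to apply it. The subpath under migration_review below is an assumption; adjust it to the actual layout produced by the playbooks:

```bash
# Compare a migrated sshd config against the Target Node's live config.
diff -u /root/migration_review/ssh/sshd_config /etc/ssh/sshd_config
```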
References & related articles
Learning Ansible with Rocky Series - Rocky Linux Documentation
Rocky supported version upgrades - Rocky Linux Documentation