[BUG]: Unable to upgrade if /boot/efi is setup as an mdadm RAID1 #134

Open
ramereth opened this issue Mar 10, 2025 · 1 comment
Labels
bug Something isn't working

Comments

@ramereth

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

I was trying to upgrade an aarch64 node from AlmaLinux 8 to AlmaLinux 9 when I encountered an issue during the upgrade process in the booted initramfs.

While this was on aarch64, I believe it likely affects x86_64 and any other architecture as well. Here's the layout of the system and the fstab for reference:

Block devices (lsblk output):

NAME                      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
nvme0n1                   259:0    0 894.3G  0 disk  
|-nvme0n1p1               259:1    0     1G  0 part  
| `-md127                   9:127  0  1022M  0 raid1 /boot
|-nvme0n1p2               259:2    0   600M  0 part  
| `-md125                   9:125  0 599.9M  0 raid1 /boot/efi
`-nvme0n1p3               259:3    0 892.7G  0 part  
  `-md126                   9:126  0 892.5G  0 raid1 
    |-almalinux_arm1-root 253:0    0 764.5G  0 lvm   /
    `-almalinux_arm1-swap 253:1    0   128G  0 lvm   [SWAP]
nvme1n1                   259:4    0 931.5G  0 disk  
|-nvme1n1p1               259:5    0     1G  0 part  
| `-md127                   9:127  0  1022M  0 raid1 /boot
|-nvme1n1p2               259:6    0   600M  0 part  
| `-md125                   9:125  0 599.9M  0 raid1 /boot/efi
`-nvme1n1p3               259:7    0 892.7G  0 part  
  `-md126                   9:126  0 892.5G  0 raid1 
    |-almalinux_arm1-root 253:0    0 764.5G  0 lvm   /
    `-almalinux_arm1-swap 253:1    0   128G  0 lvm   [SWAP]

/etc/fstab:

/dev/mapper/almalinux_arm1-root /                       ext4    defaults        1 1
UUID=ef8fe825-99ef-4c01-aab1-96a46b642e82 /boot                   ext4    defaults        1 2
UUID=3D34-9948          /boot/efi               vfat    umask=0077,shortname=winnt 0 2
/dev/mapper/almalinux_arm1-swap none                    swap    defaults        0 0
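
One detail worth confirming for this layout: the firmware can only read the vfat ESP on top of the mirror if the mdadm metadata sits at the end of each member (version 1.0), leaving the filesystem at offset 0. A quick check, assuming the device names from the lsblk output above:

# Show the metadata version and member devices of the ESP array
mdadm --detail /dev/md125

# Or inspect a member partition directly
mdadm --examine /dev/nvme0n1p2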

Unfortunately I don't have any error output, as I've already worked around it; however, it's likely reproducible. The problem seems to be that the system doesn't know how to reinstall GRUB, since it tries to use the RAID device instead of each underlying disk.
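
For illustration only, here is roughly what per-disk handling of a mirrored ESP can look like; the label and loader path are assumptions based on a default AlmaLinux aarch64 install, and this is not what leapp actually runs:

# Register one firmware boot entry per physical disk, so either mirror
# member can boot on its own; partition 2 is the ESP per the layout above
efibootmgr --create --disk /dev/nvme0n1 --part 2 \
    --label 'AlmaLinux' --loader '\EFI\almalinux\shimaa64.efi'
efibootmgr --create --disk /dev/nvme1n1 --part 2 \
    --label 'AlmaLinux' --loader '\EFI\almalinux\shimaa64.efi'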

I think this might be an issue with grub-install; however, I know the AlmaLinux installer allowed this configuration (at least, that's my recollection).

The workaround was to:

  1. Break the RAID1 holding /boot/efi and reconfigure the system to use a single disk
  2. After the upgrade, rebuild the RAID1 manually by restoring a copy of the contents of /boot/efi (see the sketch below)
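
A minimal sketch of those two steps, assuming the device names from the lsblk output above and metadata 1.0 on the ESP array (illustrative, not the exact commands I ran):

# Step 1: break the mirror so /boot/efi lives on a plain partition
umount /boot/efi
mdadm --stop /dev/md125
mdadm --zero-superblock /dev/nvme0n1p2 /dev/nvme1n1p2
# With metadata 1.0 the vfat filesystem starts at offset 0, so the
# UUID referenced in /etc/fstab should still resolve on the bare partition
mount /dev/nvme0n1p2 /boot/efi

# Step 2, after the upgrade: recreate the mirror from a saved copy
cp -a /boot/efi /root/esp-backup
umount /boot/efi
mdadm --create /dev/md125 --level=1 --raid-devices=2 --metadata=1.0 \
    /dev/nvme0n1p2 /dev/nvme1n1p2
mkfs.vfat /dev/md125      # assigns a new UUID; update /etc/fstab to match
mount /dev/md125 /boot/efi
cp -a /root/esp-backup/. /boot/efi/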

Expected Behavior

I expected leapp to at least send me a warning, along with the suggested fix, before doing the upgrade.

If you want, I'll see if I can replicate this inside a VM to make sure it wasn't just some kind of one-off.

Steps To Reproduce

No response

Anything else?

No response

Search terms

mdadm efi

@ramereth added the bug label Mar 10, 2025
@yuravk
Collaborator

yuravk commented Mar 11, 2025

This issue is currently being tracked upstream in RHEL-58913
