---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) ]---
Praveen Kumar K
Posted on July 24, 2024
After a recent kernel patch update, the instance fails to boot and the console shows output similar to the following:
`/initramfs-4.18.a-bbb.c.d.el8_10.x86_64.img' not found.
No filesystem could mount root, tried:
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
CPU: 5 PID: 1 Comm: swapper/0 Not tainted 4.18.a-bbb.c.d.el8_10.x86_64 #1
---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) ]---
To recover the affected instance, follow whichever of the methods below suits your situation:
=========
Method 1:
i. You can restore the instance from a recent working backup snapshot or AMI taken prior to the patch update.
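If you manage the instance with the AWS CLI, a rough sketch of this restore is shown below. The snapshot ID 'snap-prePatchSnapshotID', the Availability Zone, the new volume ID 'vol-newVolumeID', and the device name '/dev/xvda' are placeholders for illustration; substitute the values from your own environment, and make sure the device name matches the root device name shown for the instance in the EC2 console.
$ aws ec2 create-volume --snapshot-id snap-prePatchSnapshotID --availability-zone us-east-1a      # placeholder snapshot ID and AZ
$ aws ec2 stop-instances --instance-ids i-affectedInstanceID
$ aws ec2 wait instance-stopped --instance-ids i-affectedInstanceID
$ aws ec2 detach-volume --volume-id vol-affectedInstanceRootVolumeID
$ aws ec2 attach-volume --volume-id vol-newVolumeID --instance-id i-affectedInstanceID --device /dev/xvda      # vol-newVolumeID = volume created from the snapshot
$ aws ec2 start-instances --instance-ids i-affectedInstanceID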
=-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=-==-=-=-=-=-=-
=========
Method 2:
You can follow the steps below to rebuild the initramfs image using a temporary recovery instance.
i. Take a backup of your instance [i-affectedInstanceID] by creating an EBS snapshot of the volume(s) attached to the instance.
Once you have confirmed that a backup is in place, continue with the remaining steps.
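For reference, a minimal AWS CLI sketch of this backup step is shown below; the description text is only an example, and 'snap-newSnapshotID' is a placeholder for the snapshot ID returned by the first command.
$ aws ec2 create-snapshot --volume-id vol-affectedInstanceRootVolumeID --description "pre-recovery backup"
$ aws ec2 wait snapshot-completed --snapshot-ids snap-newSnapshotID      # placeholder for the ID returned above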
ii. Stop the current instance 'i-affectedInstanceID'.
iii. Launch a temporary recovery instance in the same Availability Zone as the instance 'i-affectedInstanceID', using the same AMI [ami-affectedInstanceAMIID] or another AMI with a similar operating system.
iv. Detach the root volume 'vol-affectedInstanceRootVolumeID' from the stopped instance and attach it to the newly launched recovery instance as a secondary EBS volume.
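A sample AWS CLI sequence for steps ii-iv is shown below (a sketch only; 'i-recoveryInstanceID' and the device name '/dev/sdf' are placeholders):
$ aws ec2 stop-instances --instance-ids i-affectedInstanceID
$ aws ec2 wait instance-stopped --instance-ids i-affectedInstanceID
$ aws ec2 detach-volume --volume-id vol-affectedInstanceRootVolumeID
$ aws ec2 wait volume-available --volume-ids vol-affectedInstanceRootVolumeID
$ aws ec2 attach-volume --volume-id vol-affectedInstanceRootVolumeID --instance-id i-recoveryInstanceID --device /dev/sdf      # placeholder recovery instance ID and device name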
v. Connect to the recovery instance with your SSH key pair.
vi. Run the following command to identify the device name of the secondary volume, e.g., /dev/xvdf1 or /dev/nvme0n1p1:
$ lsblk
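The output will look roughly like the following (device names and sizes are illustrative only); the secondary volume is the disk whose partition has no mount point, here nvme1n1p1. Use whatever device name lsblk actually reports in the next step.
NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1     259:0    0   8G  0 disk
└─nvme0n1p1 259:1    0   8G  0 part /
nvme1n1     259:2    0   8G  0 disk
└─nvme1n1p1 259:3    0   8G  0 part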
vii. Switch to root and mount the volume.
$ sudo su
# mount /dev/nvme0n1p1 /mnt
NOTE: Replace nvme0n1p1 with the correct device name from the lsblk output in step vi.
viii. Bind-mount the required pseudo-filesystems from the recovery instance into the corresponding directories under /mnt.
# for i in proc sys dev run; do mount --bind /$i /mnt/$i ; done
ix. Change root to the mounted volume.
# chroot /mnt
x. To create a backup of the initramfs in the / directory, run the following command:
# for file in /boot/initramfs-*.img; do cp "${file}" "/$(basename "$file")_$(date +%Y%m%d)"; done
xi. To list the default kernel, run the following command:
# grubby --default-kernel
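The command prints the path of the default kernel, for example (using the kernel version from the panic message above):
/boot/vmlinuz-4.18.a-bbb.c.d.el8_10.x86_64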
xii. List the kernels and initramfs images in the /boot directory with the following command:
# ls -lh /boot/vmlinuz* && ls -lh /boot/initr*
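As an optional check, the following sketch (the variable name defaultKernel is just for illustration) confirms whether an initramfs image exists for the default kernel; in this failure scenario it is typically missing or truncated:
# defaultKernel=$(grubby --default-kernel)
# ls -lh "/boot/initramfs-${defaultKernel#/boot/vmlinuz-}.img" || echo "initramfs for the default kernel is missing"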
xiii. To rebuild the initramfs, run the following command. Replace <kernelVersion> with the default kernel version found in 'step xi' (or with the kernel version you want to boot):
# cd /boot
# dracut --force --verbose initramfs-<kernelVersion>.img <kernelVersion>
e.g.,
# dracut --force initramfs-4.18.a-bbb.c.d.el8_10.x86_64.img 4.18.a-bbb.c.d.el8_10.x86_64
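After dracut completes, it is worth confirming that the new image exists and is not empty; lsinitrd (shipped with dracut) can additionally list its contents:
# ls -lh /boot/initramfs-4.18.a-bbb.c.d.el8_10.x86_64.img
# lsinitrd /boot/initramfs-4.18.a-bbb.c.d.el8_10.x86_64.img | head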
xiv. To determine whether the instance boots in UEFI or BIOS mode, run the following command:
# boot_mode=$(ls /sys/firmware/efi/efivars >/dev/null 2>&1 && echo "EFI" || echo "BIOS"); echo "Boot mode detected: $boot_mode"
xv. To update the GRUB configuration, choose one of the following commands based on the output of the previous step.
For BIOS, run the following command:
# grub2-mkconfig -o /boot/grub2/grub.cfg
For UEFI, run the following command:
# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
xvi. To exit the chroot and unmount the volume, run the following commands:
# exit
# umount /mnt/{proc,sys,dev,run}
# umount -fl /mnt
xvii. Detach the secondary volume from the recovery instance. Attach it to the original instance as the root device, using the instance's original root device name (for example, /dev/xvda or /dev/sda1).
xviii. Start the EC2 instance and then verify that the instance is responsive.
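If you are working from the AWS CLI, the start and a basic health check look roughly like the following; get-console-output helps confirm that boot progressed past the earlier panic:
$ aws ec2 start-instances --instance-ids i-affectedInstanceID
$ aws ec2 wait instance-status-ok --instance-ids i-affectedInstanceID
$ aws ec2 get-console-output --instance-id i-affectedInstanceID --output text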
=-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=-=
=========
Method 3: Revert to one of the previously installed kernels.
You can follow the same steps outlined in Method 2, from 'step i' to 'step x', to launch a recovery instance and attach the root volume of the affected instance as a secondary volume to the recovery instance.
xi. Execute the following command to see all available kernels:
# grubby --info=ALL
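On RHEL 8, grubby prints one block of key=value lines per boot entry. To see just the entry indexes, kernel paths, and titles, you can filter the output, for example:
# grubby --info=ALL | grep -E '^(index|kernel|title)='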
xii. Execute the following command to change the default kernel of the instance:
# grubby --set-default=/boot/vmlinuz-<kernelVersion>
xiii. Execute the following command to verify that the previous command successfully updated the default kernel:
# grubby --default-kernel
xiv. To exit the chroot and unmount the volume, run the following commands:
# exit
# umount /mnt/{proc,sys,dev,run}
# umount -fl /mnt
xv. Detach the secondary volume from the recovery instance. Attach it to the original instance as the root device, using the instance's original root device name (for example, /dev/xvda or /dev/sda1).
xvi. Start the EC2 instance and then verify that the instance is responsive.