Alfredo Cambera
Posted on July 23, 2020
How to Map EBS Block Devices on Ubuntu 20.04 Instances
Attaching an EBS volume to an EC2 instance makes it available to store data. What can cause confusion is that the device name shown in the AWS console is not the same as the one generated by the Linux Kernel.
When I found out about this problem, the first thing I thought was:
Why on earth would AWS show an incorrect device name? This doesn't make any sense!
The short answer is: They don't.
EBS device mapping works just fine on Amazon Linux AMIs because they are built and maintained by AWS. The problem appears on Linux AMIs created by third-party providers, such as Ubuntu.
I dug deeper into this problem, and I found out that this issue has three causes:
- The block device driver used by the Linux kernel: there are as many kernel versions in the AWS Marketplace as there are AMIs. That doesn't mean the naming scheme is completely different from AMI to AMI, but it does mean you need to be aware of its variations to manage EBS volumes reliably.
- The virtualization technology used by the instance: AWS offers the Xen hypervisor and the Nitro system to power HVM instances. Depending on the instance type you choose, you'll get one or the other.
- The device response time: block devices can respond to discovery in a different order across instance starts, which causes device names to change.
In this post, I'll share what I learned while troubleshooting this problem. Hopefully, it will help you avoid some headaches.
How can this issue affect you?
Working with EBS volumes that you can't reliably identify can generate some self-inflicted damage. Here are some possible scenarios:
- You could format the incorrect device.
- You could schedule EBS snapshots for the incorrect disk.
- Tools that rely on the AWS API to identify disks (Ansible, SaltStack, the AWS SDKs, etc.) can run into problems when working with volumes.
Let's reproduce the issue
I’ll create two environments, one per virtualization technology: Environment A - Xen and Environment B - Nitro.
EC2 instance configuration | Environment A - Xen | Environment B - Nitro
---|---|---
Hostname | lab-a | lab-b
AMI name | Ubuntu Server 20.04 LTS | Ubuntu Server 20.04 LTS
Instance type | t2.micro | t3a.nano
Virtualization technology | Xen | Nitro
Region | us-east-1 | us-east-1
Tag (Name) | env-a | env-b
EBS volume configuration | Environment A - Xen | Environment B - Nitro |
---|---|---|
Size | 10GiB | 10GiB |
Volume type | General Purpose SSD (gp2) | General Purpose SSD (gp2) |
Region | us-east-1 | us-east-1 |
Tag (Name) | env-a-disk | env-b-disk |
Then, I’ll use the EC2 Instance Metadata Service and the AWS Command Line Interface (awscli) to gather disk information from the AWS API.
Troubleshooting "Environment A - Xen"
To confirm you’re working on a Xen instance, run the following command. You should get the same output:
ubuntu@lab-a:~$ file /sys/hypervisor/uuid
/sys/hypervisor/uuid: ASCII text
To get the volume ID and the device name generated by AWS, we use the awscli with the tag env-a-disk as a parameter for the filter:
ubuntu@lab-a:~$ INSTANCE_ID=$(curl -s "http://169.254.169.254/2012-01-12/meta-data/instance-id")
ubuntu@lab-a:~$ aws ec2 describe-volumes \
--region us-east-1 \
--filters \
"Name=attachment.instance-id,Values=${INSTANCE_ID}" \
"Name=tag:Name,Values=env-a-disk" \
--query \
"Volumes[].Attachments[].[Device,VolumeId]" \
--output json
[
[
"/dev/sdb",
"vol-09550e13d86b13653"
]
]
If you try to access the disk using the device name generated by AWS, you’ll get the following error:
ubuntu@lab-a:~$ file /dev/sdb
/dev/sdb: cannot open `/dev/sdb' (No such file or directory)
The Xen block device naming scheme has two parts:
- The Xen device namespace, /dev/xvd. It's used as a prefix for every block device.
- The last letter of the device name provided by AWS. In this case, that's b, from /dev/sdb.
The EBS device name is the result of joining /dev/xvd and b: /dev/xvdb. Let's check it out:
ubuntu@lab-a:~$ file /dev/xvdb
/dev/xvdb: block special (202/16)
ubuntu@lab-a:~$ sudo fdisk -l /dev/xvdb
Disk /dev/xvdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
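The naming rule above is easy to script. Here is a minimal sketch, assuming the AWS-side name came back from the awscli call shown earlier (the variable names are my own):

```shell
#!/bin/sh
# Derive the Xen kernel device name from the AWS-side device name (sketch).
AWS_DEV="/dev/sdb"             # device name reported by the AWS API
SUFFIX="${AWS_DEV#/dev/sd}"    # strip the /dev/sd prefix -> "b"
XEN_DEV="/dev/xvd${SUFFIX}"    # prepend the Xen device namespace
echo "${XEN_DEV}"              # prints /dev/xvdb
```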
Now that you have successfully identified the disk, you’re ready to create a symlink to the device name provided by AWS:
ubuntu@lab-a:~$ sudo ln -s /dev/xvdb /dev/sdb
Troubleshooting "Environment B - Nitro"
To confirm you’re working on a Nitro instance, run the following command. You should get the same output:
ubuntu@lab-b:~$ file /sys/devices/virtual/dmi/id/board_asset_tag
/sys/devices/virtual/dmi/id/board_asset_tag: ASCII text
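The Xen and Nitro file checks can be wrapped in a small helper that guesses which stack you're on. This is a sketch: the function name detect_ec2_virt is my own, and it assumes (as I observed on these instances) that on Nitro the board asset tag holds the instance ID (i-...):

```shell
#!/bin/sh
# Hypothetical helper: guess the EC2 virtualization stack from sysfs.
detect_ec2_virt() {
    if [ -e /sys/hypervisor/uuid ]; then
        echo "xen"
    elif grep -q '^i-' /sys/devices/virtual/dmi/id/board_asset_tag 2>/dev/null; then
        # On Nitro instances the asset tag holds the instance ID (i-...)
        echo "nitro"
    else
        echo "unknown"
    fi
}
detect_ec2_virt
```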
To match the volume ID with the device, you need to get the required information from the AWS API:
ubuntu@lab-b:~$ INSTANCE_ID=$(curl -s "http://169.254.169.254/2012-01-12/meta-data/instance-id")
ubuntu@lab-b:~$ aws ec2 describe-volumes \
--region us-east-1 \
--filters \
"Name=attachment.instance-id,Values=${INSTANCE_ID}" \
"Name=tag:Name,Values=env-b-disk" \
--query \
"Volumes[].Attachments[].[Device,VolumeId]" \
--output json
[
[
"/dev/sdb",
"vol-034377a89c29ab66f"
]
]
Let's check if the device is available:
ubuntu@lab-b:~$ file /dev/sdb
/dev/sdb: cannot open `/dev/sdb' (No such file or directory)
Nitro is a much more advanced virtualization technology. As such, it exposes the volume ID of the device, which I'll use to identify the EBS volume.
I'll use lsblk to list the serial numbers of all connected block devices:
ubuntu@lab-b:~$ lsblk -o NAME,SERIAL
NAME SERIAL
loop0
loop1
loop2
loop3
loop4
loop5
loop6
loop7
nvme1n1 vol034377a89c29ab66f
nvme0n1 vol02db41a568b8fe5cc
└─nvme0n1p1
As shown in the output of the awscli command, the volume ID of the disk is vol-034377a89c29ab66f. It matches the serial of nvme1n1, so the device name generated by the kernel is /dev/nvme1n1. Let's confirm it:
ubuntu@lab-b:~$ file /dev/nvme1n1
/dev/nvme1n1: block special (259/0)
ubuntu@lab-b:~$ sudo fdisk -l /dev/nvme1n1
Disk /dev/nvme1n1: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk model: Amazon Elastic Block Store
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
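Notice that the serial shown by lsblk is simply the volume ID with the hyphen removed. That makes the lookup easy to script; this is a sketch that prints the matching kernel device name, or nothing if the volume isn't attached to the box:

```shell
#!/bin/sh
# Map an EBS volume ID to its NVMe kernel device via the serial (sketch).
VOLUME_ID="vol-034377a89c29ab66f"
SERIAL="$(echo "${VOLUME_ID}" | tr -d '-')"   # -> vol034377a89c29ab66f
# Print the device whose serial matches, e.g. /dev/nvme1n1
lsblk -o NAME,SERIAL | awk -v s="${SERIAL}" '$2 == s { print "/dev/" $1 }'
```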
Common scenarios
Automatically mount a device
The recommended way of referring to disks for automatic mounting is the universally unique identifier (UUID). It is unique to each formatted disk, and it doesn't change over time.
The following commands format the EBS volume that I had previously identified on the lab-a box and use the resulting UUID to configure the /etc/fstab file to automatically mount the device on each reboot:
ubuntu@lab-a:~$ sudo mkfs -t xfs /dev/xvdb
ubuntu@lab-a:~$ sudo lsblk -o +UUID | grep xvdb
xvdb 202:16 0 10G 0 disk 544ce0a9-ae55-4e03-b58a-eeacf2b6445f
ubuntu@lab-a:~$ sudo mkdir /disk
ubuntu@lab-a:~$ echo "UUID=544ce0a9-ae55-4e03-b58a-eeacf2b6445f /disk xfs defaults,nofail 0 2" | sudo tee -a /etc/fstab
I ran the following command after restarting the box to confirm that everything was working:
ubuntu@lab-a$ mount |grep disk
/dev/xvdb on /disk type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
Working with tools that require the device name to match the one provided by AWS
You can create a symlink to the device as a workaround for tools that don't allow referring to disks by UUID:
ubuntu@lab-a$ sudo ln -s /dev/xvdb /dev/sdb
ubuntu@lab-a$ ls -al /dev/sdb
lrwxrwxrwx 1 root root 9 Jul 22 19:36 /dev/sdb -> /dev/xvdb
My recommendation is to delete this symlink afterward to avoid any confusion in the future.
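If a tool permanently needs the AWS-style name, keep in mind that a symlink created with ln does not survive a reboot. A udev rule can recreate the link at boot; this is a sketch, and the rules file name is my own choice:

```
# /etc/udev/rules.d/99-ebs-aws-names.rules (hypothetical file name)
# Recreate /dev/sdb as a symlink to the Xen device xvdb on every boot
KERNEL=="xvdb", SYMLINK+="sdb"
```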
Conclusion
Hopefully, this article helped you gain a better understanding of how to use EBS volumes with Ubuntu. I used commands present on most Linux distributions, so this should also work for you if you are using another distro.
If you are interested, here is a link to a highly recommended article on the evolution of the virtualization technology in Amazon: AWS EC2 Virtualization 2017: Introducing Nitro by Brendan Gregg.