Creating virtual machines with LXD


Ákos Takács

Posted on July 29, 2023


Intro

My original goal was to show you how to build your home lab, but before we can automate everything with Ansible, we have to learn how to do it manually. In this post I will show you how to create virtual machines with LXD in a way that lets you work quickly even without Ansible.

Table of contents

- Why LXD?
- Can you also say anything bad about LXD?
- Install LXD
- Preparation before initialization
- Initializing / configuring LXD
- Non-interactive initialization
- Remove LXD
- Use the previously saved config to initialize LXD
- Let's see what we have now
- First virtual machine
- Overcommitting and how to avoid that
- Use profiles to install similar virtual machines easily
- Checking the installed virtual machines
- Conclusion

Why LXD?


Even though I had some issues with LXD recently, it is still a great tool with an interface similar to Docker's. When I say "similar", don't expect "the same"! LXD is not Docker, but some concepts are shared. Docker is for creating lightweight containers, with the option of adding a thin virtual machine layer for better isolation (see the Kata container runtime), while LXD is for running full-fledged Linux distributions inside containers, and it can also run KVM-based virtual machines using QEMU.

You can also use cloud-init to automatically add users and SSH keys to all of your virtual machines, and of course cloud-init can also be used to install some packages when the virtual machine starts for the first time. Many modern solutions either support LXD or are even based on it.

LXD also has long-term support (LTS) versions, and their end-of-life dates are basically the same as those of the Ubuntu LTS releases.

Can you also say anything bad about LXD?


Well, some of you might not like snap packages, and indeed I never recommend using snap to install Docker, but Docker is not developed by Canonical (the developer of Snap) and LXD is. Unless you want to build LXD from source, you need to use the snap package on Ubuntu, although some distributions provide other ways as well.

I have still not been able to move virtual machines between LXD servers while the virtual machines were running, and I can't say I was completely satisfied with the error messages while I was working on a solution. If you want to know more about my issue, you can read about it on the Ubuntu forum:

Moving running LXD virtual machines from one server to another - Support - Ubuntu Community Hub



If that didn't scare you away, you can still use LXD to easily create virtual machines on one server, or even move them to other servers after stopping them, and you can use the same interface to create containers.

Install LXD


First you will need a Linux operating system. I will use an Ubuntu 22.04 server, which supports Snap. You can check whether your operating system supports it: Install the daemon
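
If you are not sure whether snapd is present at all, a quick sanity check is the snap client's version command, which also reports the daemon's version:

snap version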

You can also check the other installation methods provided by some distributions: Other installation options

Let's see the Snap way:



sudo snap install lxd --channel 5.15/stable



The above command installs the latest stable version at the time of writing; however, that is not an LTS version with long-term support, so on a production system you would need to upgrade or reinstall it every month. That is not recommended.

Before Canonical decided to move LXD out of the LinuxContainers project, you could find out which version was the LTS by checking the releases category on linuxcontainers.org. Since the LTS versions so far have been 3.0, 4.0 and 5.0, it is safe to say that the next LTS release will be 6.0.

To check the list of available channels and versions, you can use the following command:



snap info lxd



Preparation before initialization


When you install LXD, it is just the Snap package without any configuration. Before you can start running containers and virtual machines, you need to configure some defaults for LXD networks, storage pools and profiles, so you won't have to configure every instance from scratch. Instead, you can just assign profiles to instances.

Even before that, we need to decide how we are going to do it. One of the first things to decide is which storage driver you want to use. ZFS is one of the recommended drivers and it is also the default. If you skip preparation entirely and just press enter at every step of the initialization, you will get a ZFS storage pool, except that it will use a virtual disk image file, which is fine for testing, but for the best performance you should use a dedicated physical disk or partition.

I have two physical disks: an HDD at /dev/sda and an SSD at /dev/nvme0n1. I used the SSD for Windows and the HDD for Linux. That's not a recommendation, just a fact. If you plan to use your Linux system more frequently and for tasks that require faster disks, choose the SSD. In the following output you will see how I partitioned my HDD, which also shows how badly I designed it: I have only 15GiB for my root partition, which is not enough when I install many snap packages that I need on my host and not in virtual machines. I can handle it, but it requires extra work.

The following is part of the output of lsblk:



sda           8:0    0 465.8G  0 disk
├─sda1        8:1    0     1M  0 part
├─sda2        8:2    0    15G  0 part /
├─sda3        8:3    0     2G  0 part /boot
├─sda4        8:4    0    30G  0 part /home
├─sda5        8:5    0     3G  0 part /var/log
├─sda6        8:6    0   300G  0 part
└─sda7        8:7    0 115.8G  0 part /mnt/data



sda6 is not mounted anywhere because that is what I use for the ZFS storage pool. If you have multiple disks or partitions, you can add more than one to the same ZFS storage pool, but as far as I know, LXD can initialize it with only one. Although I have only one, I will use a method that works with multiple disks as well: we create the ZFS pool even before initializing LXD. Because paths like /dev/sda6 can change at every boot, it is better to use a persistent name when we add the disk to the storage pool. If the name of the partition is sda6, like mine, you can use the following command to find persistent aliases for it:



disk=/dev/sda6
find -L /dev/disk/ -samefile "$disk"



In my case the output is this:



/dev/disk/by-label/local
/dev/disk/by-uuid/2675692044005731665
/dev/disk/by-partuuid/af18d4b7-4bf0-44d9-92df-7321b1565ce5
/dev/disk/by-path/pci-0000:00:17.0-ata-5.0-part6
/dev/disk/by-path/pci-0000:00:17.0-ata-5-part6
/dev/disk/by-id/scsi-SATA_Samsung_SSD_850_S2RBNX0J103301N-part6
/dev/disk/by-id/scsi-0ATA_Samsung_SSD_850_S2RBNX0J103301N-part6
/dev/disk/by-id/scsi-1ATA_Samsung_SSD_850_EVO_500GB_S2RBNX0J103301N-part6
/dev/disk/by-id/ata-Samsung_SSD_850_EVO_500GB_S2RBNX0J103301N-part6
/dev/disk/by-id/wwn-0x5002538d41a17ca1-part6
/dev/disk/by-id/scsi-35002538d41a17ca1-part6



Labeling disks can be very convenient, but it can also be dangerous if you choose a bad label, like the one in the first line, which is so generic you won't know what it means. I assume it was automatically created by LXD, because the LXD storage pool (not the ZFS pool) is called "local" when we install an LXD cluster instead of an individual LXD server. If I created it, I should be ashamed...

I will use /dev/disk/by-id/scsi-1ATA_Samsung_SSD_850_EVO_500GB_S2RBNX0J103301N-part6

Before we can start working with ZFS without LXD, we need to install zfsutils-linux (at least that is what the package is called on Ubuntu 22.04):



sudo apt install zfsutils-linux



It will install systemd services and some management commands like zfs and zpool. Let's define the name of the ZFS pool and the list of disks that you want to add:



name="lxd-default"
disks=(
  /dev/disk/by-id/scsi-1ATA_Samsung_SSD_850_EVO_500GB_S2RBNX0J103301N-part6
)



If you want to add more disks, just add more lines to the bash array. You can also choose a different name. LXD would create the pool as "default", but I like to see what kind of default that pool is, so I use the lxd- prefix. The next step is creating the pool:



sudo zpool create "$name" "${disks[@]}"



Check if you have done it correctly:



zpool list



You should see something like this:



NAME          SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
lxd-default   298G   106K   298G        -         -     0%     0%  1.00x    ONLINE  -



ZFS works with datasets. Before installing LXD, there will be only one:



zfs list



Output:



NAME          USED  AVAIL     REFER  MOUNTPOINT
lxd-default   106K   289G       24K  /lxd-default



That's it. ZFS can do a lot more, such as working as software RAID, which would require more configuration. For now this simple pool will be enough for us.
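
For example, if you had two disks or partitions and wanted redundancy instead of a simple striped pool, a mirrored pool is only a small change. This is just a sketch; the disk IDs below are hypothetical placeholders:

# mirror the two disks instead of striping them (halves capacity, adds redundancy)
sudo zpool create lxd-default mirror \
  /dev/disk/by-id/ata-EXAMPLE_DISK_A \
  /dev/disk/by-id/ata-EXAMPLE_DISK_B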

Initializing / configuring LXD


You can initialize LXD interactively or non-interactively. In this post we will mainly discuss the interactive way. You can also find a short description of this in my tutorial called Learn Docker; I wrote that part to be able to compare LXD and Docker: Learn Docker: LXD. There is a pretty good description of the initialization steps in the documentation as well, which covers the latest version, so it can change in the future. To start the initialization for version 5.15, run the following command:



sudo lxd init



First question:

Would you like to use LXD clustering? (yes/no) [default=no]:

In this case we don't want an LXD cluster, so just press enter to get the next question:

Do you want to configure a new storage pool? (yes/no) [default=yes]:

Well, in this case yes, we do. Note that this is not a ZFS pool, but an LXD storage pool, which will use a ZFS pool. The default answer is "yes", so just press enter again.

Name of the new storage pool [default=default]:

The default name will be fine, but you can also change it to any name you like. I will leave it as it is and press enter.

Name of the storage backend to use (zfs, btrfs, ceph, dir, lvm) [default=zfs]:

Again, the default storage driver is what I want so let's press enter again.

Create a new ZFS pool? (yes/no) [default=yes]:

Since we have already created a ZFS pool, it's time to type "no" (without the quotation marks) and press enter.

Name of the existing ZFS pool or dataset:

There is no default value. Since we named our pool as "lxd-default" let's type that and press enter.

Would you like to connect to a MAAS server? (yes/no) [default=no]:

MAAS is something I want to write about in the future, but now we don't have any, so press enter once again.

Would you like to create a new local network bridge? (yes/no) [default=yes]:

A local network bridge will be available only inside your local machine, similar to how Docker networks work. You could also use your LAN network or even VLANs, but that is something you should try after you have learnt to work with LXD locally.

What should the new bridge be called? [default=lxdbr0]:

Again, this is similar to the default Docker bridge which is "docker0". The default value, "lxdbr0" will be perfect so press enter.

What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:

LXD will try to find an unused subnet in your network to configure for the local bridge. Although antivirus software is less common on Linux workstations, it matters in case you are installing LXD on your laptop: I found out that ESET antivirus blocks this process and the initialization fails. I can't give you a solution here, so if you know one, please share it in the comment section. You could answer "none" and configure the network manually after the initialization, but there is no guarantee that the lxc commands (yes, lxc) will work properly either.

Let's assume you don't have antivirus, or that it does not interfere with LXD, and press enter without changing the default choice.
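
For reference, if the automatic configuration fails and you end up creating the bridge manually after the initialization, a minimal sketch would look like this (the subnet is a hypothetical example; pick one that is unused in your network):

# create the bridge with a static IPv4 subnet, NAT, and no IPv6
lxc network create lxdbr0 ipv4.address=10.0.100.1/24 ipv4.nat=true ipv6.address=none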

What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:

IPv6 is optional and I usually disable it, since I rarely need it locally, and disabling it also saves me from worrying about the difficulties of IPv6, especially when I am learning about something else. If you want to disable it as I do, type "none" and press enter.

Would you like the LXD server to be available over the network? (yes/no) [default=no]:

The default value is "no", which is fine, since you usually log in to the server (or you are already logged in on your laptop) and you can use the local socket. It also seems more stable, so if you don't want to manage your LXD server remotely, just press enter; you can enable it any time you need it later.
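
For reference, enabling remote access later is a single configuration setting; port 8443 is the conventional choice used in the LXD documentation:

# listen on port 8443 of all available interfaces
lxc config set core.https_address :8443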

Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:

LXD can automatically update already-downloaded images. This is something I always change to "no", but you are free to choose "yes". I disable it because I had a bad experience with automatically updated images before, although that was with libvirt, not LXD. Nevertheless, I don't like to be surprised by new bugs when I repeat an installation process frequently in my home lab, so I choose "no". I know that not updating my images can be a security risk, but I can update the images manually before creating a new virtual machine, and I can also upgrade the packages after starting the VM. If you want to follow my way, type "no" and press enter.

Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

This is the last step. We have already configured everything we could this way, and now we can type "yes" and press enter to see the generated configuration, which we can save and reload later when we want to reinstall LXD non-interactively.

My output was this:



config:
  images.auto_update_interval: "0"
networks:
- config:
    ipv4.address: auto
    ipv6.address: none
  description: ""
  name: lxdbr0
  type: ""
  project: default
storage_pools:
- config:
    source: lxd-default
  description: ""
  name: default
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
projects: []
cluster: null



Save this file anywhere you like. I will just leave it in my home directory, in this case at $HOME/lxd-init.yml.
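
If you forgot to answer "yes" at the last step, I believe newer LXD versions can also dump the current configuration in preseed format afterwards, but verify the flag on your version:

sudo lxd init --dump > "$HOME/lxd-init.yml"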

Now if you run zfs list again, you will see more datasets under "lxd-default".



NAME                                   USED  AVAIL     REFER  MOUNTPOINT
lxd-default                            603K   289G       24K  legacy
lxd-default/buckets                     24K   289G       24K  legacy
lxd-default/containers                  24K   289G       24K  legacy
lxd-default/custom                      24K   289G       24K  legacy
lxd-default/deleted                    144K   289G       24K  legacy
lxd-default/deleted/buckets             24K   289G       24K  legacy
lxd-default/deleted/containers          24K   289G       24K  legacy
lxd-default/deleted/custom              24K   289G       24K  legacy
lxd-default/deleted/images              24K   289G       24K  legacy
lxd-default/deleted/virtual-machines    24K   289G       24K  legacy
lxd-default/images                      24K   289G       24K  legacy
lxd-default/virtual-machines            24K   289G       24K  legacy



Non-interactive initialization

You can skip this step, but I want to leave this here for you in case you want to reinstall LXD.

Remove LXD


Before we can try to reinitialize LXD, let's remove it first. The following command removes the package, but also saves a snapshot of it in the snap cache:



sudo snap remove lxd



Since leftover snap packages can take a lot of space, I usually like to "purge" the package without caching:



sudo snap remove lxd --purge



If you forget to add --purge, you can list the packages in the cache:



snap saved



Output



Set  Snap  Age    Version       Rev    Size    Notes
8    lxd   12.4s  5.15-3fe7435  25086  56.3kB  auto



And use the "Set" number to "forget" it. In my case the number is "8".



sudo snap forget 8



Use the previously saved config to initialize LXD


Now we can reinstall LXD:



sudo snap install lxd --channel 5.15/stable



and reinitialize it non-interactively:



lxd init --preseed <$HOME/lxd-init.yml



This command can fail if the disk still has the ZFS filesystem on it:



Error: Failed to create storage pool "default": Provided ZFS pool (or dataset) isn't empty, run "sudo zfs list -r lxd-default" to see existing entries



In order to fix this, you need to remove the datasets in the ZFS pool using zfs destroy lxd-default/<dataset name>, or you can remove the pool and recreate it:



sudo zpool destroy lxd-default
sudo zpool create "$name" "${disks[@]}"



See the previous sections for the values of the variables.
Now you can run the init command again:



lxd init --preseed <$HOME/lxd-init.yml



Let's see what we have now


If you try to list instances (containers and virtual machines)



lxc list



you will get the following output:



To start your first container, try: lxc launch ubuntu:22.04
Or for a virtual machine: lxc launch ubuntu:22.04 --vm

+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+



As you can see, LXD helps you start your first VM, although this hint is shown only once. The suggested commands need only a few parameters, because profiles provide the defaults. If you checked the generated init config carefully, you have already seen the default profile in it, but you can also get it later.



lxc profile list



Output:



+---------+---------------------+---------+
|  NAME   |     DESCRIPTION     | USED BY |
+---------+---------------------+---------+
| default | Default LXD profile | 0       |
+---------+---------------------+---------+



Or get the content of the default profile:



lxc profile show default



Output:



config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by: []



You have a default network and a root disk for the operating system from the default LXD storage pool. The "show" and "list" subcommands work for these objects as well:



lxc network list
lxc network show lxdbr0



or



lxc storage list
lxc storage show default



You can also edit these objects with the "edit" subcommand or create new ones with the "create" subcommand. For more details, check the help from the command line like this:



lxc storage create --help



or visit the official documentation.

Finally, I want to show you the command that lists remotes, which are similar to Docker registries, if you know that concept; however, a remote can also be another LXD server that you want to control from a management machine.



lxc remote list



Output



+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
|      NAME       |                   URL                    |   PROTOCOL    |  AUTH TYPE  | PUBLIC | STATIC | GLOBAL |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| images          | https://images.linuxcontainers.org       | simplestreams | none        | YES    | NO     | NO     |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| local (current) | unix://                                  | lxd           | file access | NO     | YES    | NO     |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| ubuntu          | https://cloud-images.ubuntu.com/releases | simplestreams | none        | YES    | YES    | NO     |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+
| ubuntu-daily    | https://cloud-images.ubuntu.com/daily    | simplestreams | none        | YES    | YES    | NO     |
+-----------------+------------------------------------------+---------------+-------------+--------+--------+--------+



First virtual machine


The easiest way of creating a virtual machine was shown by the first lxc list command, but that would create the VM with a random name, just like docker run does with containers when you omit the --name option, so I want to set the name as well by passing it as the second argument after the image reference:



lxc launch --vm ubuntu:22.04 ubuntu-jammy-server



That would work, but I also want to override the CPU and memory limits, as well as the size of the root disk. If you have already started it, you can remove it with the following command:



lxc delete ubuntu-jammy-server --force



Now the limits:



lxc launch --vm ubuntu:22.04 ubuntu-jammy-server \
  --config limits.memory=2GiB \
  --config limits.cpu=2 \
  --device root,size=20GiB



I believe the memory and CPU limits are obvious. The last line sets the size of the root disk, which is 10GiB by default, but we changed it to 20GiB. If you run the following command quickly enough:



lxc exec ubuntu-jammy-server lsblk



it will first fail, because the LXD agent is not running yet:



Error: LXD VM agent isn't currently running



or show you a smaller root partition:



sda       8:0    0    20G  0 disk
├─sda1    8:1    0   2.1G  0 part /
├─sda14   8:14   0     4M  0 part
└─sda15   8:15   0   106M  0 part /boot/efi



because it takes some time for the partition to be resized automatically after initialization. Eventually you should see something like this:



sda       8:0    0    20G  0 disk
├─sda1    8:1    0  19.9G  0 part /
├─sda14   8:14   0     4M  0 part
└─sda15   8:15   0   106M  0 part /boot/efi



Note: If you want the root partition, and not the whole disk, to be 20GiB, you need to add the sizes of the other partitions to the root disk size in the lxc launch command.

If you run lxc list quickly instead of the above-mentioned lsblk, it can produce different outputs until the VM fully starts.



+---------------------+---------+------+------+-----------------+-----------+
|        NAME         |  STATE  | IPV4 | IPV6 |      TYPE       | SNAPSHOTS |
+---------------------+---------+------+------+-----------------+-----------+
| ubuntu-jammy-server | RUNNING |      |      | VIRTUAL-MACHINE | 0         |
+---------------------+---------+------+------+-----------------+-----------+



or



+---------------------+---------+------------------------+------+-----------------+-----------+
|        NAME         |  STATE  |          IPV4          | IPV6 |      TYPE       | SNAPSHOTS |
+---------------------+---------+------------------------+------+-----------------+-----------+
| ubuntu-jammy-server | RUNNING | 10.177.180.83 (enp5s0) |      | VIRTUAL-MACHINE | 0         |
+---------------------+---------+------------------------+------+-----------------+-----------+



The IP address can appear as "IP (eth0)", disappear, and reappear as "IP (enp5s0)" or similar. This is because Ubuntu renames the network interface during boot. After that, the LXD agent process starts in the virtual machine, and you can run bash inside it similarly to docker exec, but this is not a surprise now, as we already used lxc exec above. The difference is that we get an interactive terminal by default.



lxc exec ubuntu-jammy-server bash



Although the above command works, sometimes it is better to put the command after a double dash, so the flags of the executable are not interpreted by lxc:



lxc exec ubuntu-jammy-server -- bash --help



Overcommitting and how to avoid that


Even though this post is not about production systems, there is one thing I feel I have to mention: a concept called "overcommitting". It means that if you add up the amount of memory, the number of CPUs or the sizes of the root disks you assigned to your virtual machines, you get more than you physically have. Sometimes that can be useful, for example when you know that each virtual machine will need 10 gigabytes of memory occasionally, but never at the same time.

As you could see, we used limits.memory and limits.cpu, meaning these are just limits; they don't guarantee that the required resources are actually available at any moment. When you check the amount of resources inside the virtual machine, you will still see it as if you had everything you requested. To make sure overcommitting doesn't happen, you need to calculate the amount of safely assignable resources before you create a new virtual machine or before you change the configuration later. To add up all the memory you assigned to LXD virtual machines, you can run the following command and get a number in gibibytes:



lxc list --format json \
  | jq -r '.[].config."limits.memory"' \
  | numfmt --suffix B --from iec-i --to-unit=1073741824 \
  | cut -dB -f1 \
  | paste -s -d+ - \
  | bc



or



lxc list --format json \
  | jq -r '.[].config."limits.memory"' \
  | numfmt --suffix B --from iec-i --to-unit=1073741824 \
  | cut -dB -f1 \
  | jq -s add



If the above commands didn't work for you, it could be because they require the jq command and numfmt, which is part of coreutils.
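
To put the result into context, you can compare it with the physical memory of the host. This is just a sketch built from the same pipeline plus /proc/meminfo (MemTotal is reported in kB, so it is divided by 1048576 to get GiB):

# total GiB assigned to instances vs. physical GiB on the host
assigned=$(lxc list --format json \
  | jq -r '.[].config."limits.memory"' \
  | numfmt --suffix B --from iec-i --to-unit=1073741824 \
  | cut -dB -f1 \
  | jq -s add)
physical=$(awk '/MemTotal/ {printf "%.1f", $2 / 1048576}' /proc/meminfo)
echo "Assigned: ${assigned} GiB of ${physical} GiB physical memory"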

Now that you know this, doing the same with the CPU limits is much easier, since there is no unit to convert; it is just a number:



lxc list --format json \
  | jq -r '.[].config."limits.cpu"' \
  | paste -s -d+ - \
  | bc



or



lxc list --format json \
  | jq -r '.[].config."limits.cpu"' \
  | jq -s add



Let's do it with the root disks:



lxc list --format json \
  | jq -r '.[].devices.root | select(.type == "disk") | .size ' \
  | numfmt --suffix B --from iec-i --to-unit=1073741824 \
  | cut -dB -f1 \
  | paste -s -d+ - \
  | bc



or



lxc list --format json \
  | jq -r '.[].devices.root | select(.type == "disk") | .size ' \
  | numfmt --suffix B --from iec-i --to-unit=1073741824 \
  | cut -dB -f1 \
  | jq -s add



Unfortunately, this works only if you defined the resource limits in the instance config and not in a profile, not to mention the cases when you didn't define them anywhere, or when you have multiple projects (like namespaces in Kubernetes) or multiple storage pools. So to really avoid overcommitting, you need to do more than I could show you in this post.
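
One thing that helps with the profile case: lxc config show has an --expanded flag that merges profile values into the instance configuration, so you can read the effective limits per instance. A minimal sketch:

# effective limits for one instance, including values inherited from profiles
lxc config show --expanded ubuntu-jammy-server | grep -E 'limits\.(cpu|memory)'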

Use profiles to install similar virtual machines easily


Up until now, everything we did was based on the default profile. When we wanted more memory, more CPUs or a bigger root disk, we used the --config or --device options to override the default values. Let's use profiles to make this easier. I like Docker, so I will create a docker-host profile, but first I need volumes (new virtual disks) for /var/lib/docker in the virtual machines.



pool_name="default"
volume_name="docker-01-docker-data-root"
size="50GiB"

lxc storage volume create --type "block" "$pool_name" "$volume_name" "size=$size"



The volume name could actually be anything, but I chose the format <vm_name>-<device_name>, so when I list the volumes, I will know which volume is used by which Docker host.

Now repeat the creation with docker-02:



pool_name="default"
volume_name="docker-02-docker-data-root"
size="50GiB"

lxc storage volume create --type "block" "$pool_name" "$volume_name" "size=$size"



To list the volumes, run the following command:



pool_name="default"
lxc storage volume list "$pool_name" type=custom



Output:



+--------+----------------------------+-------------+--------------+---------+
|  TYPE  |            NAME            | DESCRIPTION | CONTENT-TYPE | USED BY |
+--------+----------------------------+-------------+--------------+---------+
| custom | docker-01-docker-data-root |             | block        | 0       |
+--------+----------------------------+-------------+--------------+---------+
| custom | docker-02-docker-data-root |             | block        | 0       |
+--------+----------------------------+-------------+--------------+---------+



We also need a new profile named "docker-host":



lxc profile create docker-host



This will be an empty profile, but we will change that soon. Save the following content in a file named lxd-profile-docker-host.yml:



config:
  boot.autostart: "false"
  limits.cpu: "4"
  limits.memory: 4GiB
  cloud-init.user-data: |
    #cloud-config
    users:
      - name: manager
        lock_passwd: false
        groups: sudo
        shell: /bin/bash
        passwd: "$6$5PNw7C5RTRxcv96e$CfGSFJZ/y4vMtO.wAs.fE59fXGJT.65.rEnjNnYgNa5axrvHS8B0X53sMaoCeoCOb9PuZMnzbBaZkvodzwH/s0"

    runcmd:
      - apt-get update
      - apt-get install -y jq
      - |
        /bin/bash -c '
          disk_path=$(lsblk --json --output-all | jq --raw-output ".blockdevices[] | select(.serial == \"lxd_docker--data--root\") | .path");
          e2label "$disk_path" || mkfs.ext4 "$disk_path";
          e2label "$disk_path" docker-data-root;
          echo "LABEL=docker-data-root  /var/lib/docker  ext4  defaults  0  0" >> /etc/fstab;
          mkdir /var/lib/docker
          mount "LABEL=docker-data-root";
        '
      # see: https://docs.docker.com/engine/install/ubuntu/
      - sudo apt-get update
      - sudo apt-get install -y ca-certificates curl gnupg
      - sudo install -m 0755 -d /etc/apt/keyrings
      - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
      - sudo chmod a+r /etc/apt/keyrings/docker.gpg
      - |
        echo \
          "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
          "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" \
          | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
      - sudo apt-get update
      - |
        /bin/bash -c '
          VERSION_STRING="5:24.0.4-1~ubuntu.22.04~jammy" \
          && sudo apt-get install -y \
              docker-ce=$VERSION_STRING \
              docker-ce-cli=$VERSION_STRING \
              containerd.io \
              docker-buildx-plugin \
              docker-compose-plugin \
          && sudo apt-mark hold docker-ce docker-ce-cli
        '
description: Docker Host
devices:
  root:
    path: /
    pool: default
    size: 20GiB
    type: disk
  docker-data-root:
    pool: default
    type: disk
    source: this_is_a_placeholder__keep_it_here



This profile will only work with Ubuntu 22.04 instances, but you can modify it by removing all the references to the version number, like the held package versions. Let's update the docker-host profile:



lxc profile edit docker-host <lxd-profile-docker-host.yml



At the beginning, we have the memory and CPU limits and also an autostart setting, which is similar to Docker's restart policy. I set it to "false", so when I start my host machine, the virtual machine will not start automatically:



config:
  boot.autostart: "false"
  limits.cpu: "4"
  limits.memory: 4GiB



Then we use cloud-init to create a user and run commands at first boot. The value of cloud-init.user-data is a string; that's why we use a pipe character after the parameter name. This is a YAML feature, not an LXD or cloud-init one.



cloud-init.user-data: |
  #cloud-config



Now let's see the user creation:



users:
  - name: manager
    lock_passwd: false
    groups: sudo
    shell: /bin/bash
    passwd: "$6$5PNw7C5RTRxcv96e$CfGSFJZ/y4vMtO.wAs.fE59fXGJT.65.rEnjNnYgNa5axrvHS8B0X53sMaoCeoCOb9PuZMnzbBaZkvodzwH/s0"



The value of passwd is a password hash which I generated with the following command:



openssl passwd -6



I got a prompt and set the password to "password". The output was the hash, which I added to the user definition, quoted.

Then we use cloud-init's runcmd, which runs at first boot. The value of runcmd is a list. Sometimes we have to use special syntax, because not everything that would be valid in a shell script is valid in this YAML list. For example, for some reason I don't know yet, environment variables can't be defined unless you run everything directly as the argument of a shell. This could have a reason similar to what I mentioned in another post about Docker's CMD, ENTRYPOINT and SHELL instructions.

That's why you can see this syntax in the list:



- |
  /bin/bash -c '
    disk_path=$(lsblk --json --output-all | jq --raw-output ".blockdevices[] | select(.serial == \"lxd_docker--data--root\") | .path");
    e2label "$disk_path" || mkfs.ext4 "$disk_path";
    e2label "$disk_path" docker-data-root;
    echo "LABEL=docker-data-root  /var/lib/docker  ext4  defaults  0  0" >> /etc/fstab;
    mkdir /var/lib/docker
    mount "LABEL=docker-data-root";
  '



I use a similar method where I install Docker:



- |
  /bin/bash -c '
    VERSION_STRING="5:24.0.4-1~ubuntu.22.04~jammy" \
    && sudo apt-get install -y \
         docker-ce=$VERSION_STRING \
         docker-ce-cli=$VERSION_STRING \
         containerd.io \
         docker-buildx-plugin \
         docker-compose-plugin \
    && sudo apt-mark hold docker-ce docker-ce-cli
  '



This is where you could change the Docker version, or you can just remove the version completely if you always want the latest:



- |
  sudo apt-get install -y \
    docker-ce \
    docker-ce-cli \
    containerd.io \
    docker-buildx-plugin \
    docker-compose-plugin \
  && sudo apt-mark hold docker-ce docker-ce-cli



Both solutions make sure you don't upgrade Docker accidentally, which is very important in production.

In the devices section, we also set the size of the root disk and define a non-existent volume as a placeholder so that the profile is valid, but we will need to override the volume name when we create the virtual machine. This trick lets us pass fewer arguments on the command line and also makes sure we don't forget to add a disk for the Docker data, since LXD throws an error if we leave the non-existent volume in the definition without overriding it from the command line.

Note: This trick can also be problematic if you want to add the profile to an existing virtual machine, because you can't override the value before adding the profile, but you also can't add the profile while the placeholder volume does not exist. If this is a problem for you, you can just remove the docker-data-root section from the profile and add the device manually later.
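
For that manual route, attaching the custom volume to an existing instance would look something like this, using the pool and volume names from this post:

# attach the custom block volume from the "default" pool to the VM
lxc config device add docker-01 docker-data-root disk \
  pool=default source=docker-01-docker-data-root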

Now create the virtual machine:



lxc launch ubuntu:22.04 docker-01 --vm \
  --profile default \
  --profile docker-host \
  --device docker-data-root,pool=default \
  --device docker-data-root,type=disk \
  --device docker-data-root,source=docker-01-docker-data-root 



Optionally you can see the boot process:



lxc console docker-01



You may need to press enter to get the login prompt after the installation. The default user of the image has no password, so unless you log in as the "manager" user we defined in the profile, the prompt mostly just shows you that the installation process has finished.

In order to exit the console, press CTRL+a and after that press 'q'.

Now you can repeat the same commands with docker-02, for which you need to mount docker-02-docker-data-root as the Docker data root, as shown below.
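
For reference, the launch command for the second VM is the same; only the names change:

lxc launch ubuntu:22.04 docker-02 --vm \
  --profile default \
  --profile docker-host \
  --device docker-data-root,pool=default \
  --device docker-data-root,type=disk \
  --device docker-data-root,source=docker-02-docker-data-root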

Checking the installed virtual machines


It's time to confirm whether the installation was successful. I will only do it with docker-01; the process is the same for any VM.

We know we wanted to mount a new disk at /var/lib/docker. The df command can show us which disk is mounted at a specific folder, so let's run the following command:



lxc exec docker-01 -- df -h /var/lib/docker



The output should be only one line (and the header):



Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb         49G  280K   47G   1% /var/lib/docker



Check if the size is what you expected. To be honest, I did not expect the size to be just 49G, since I specifically asked for 50GiB. You can also see that the available size is 47G, so the difference is much more than the used disk space, which is only 280K. The fact is that ZFS needs some space for itself. Assuming you named everything as I did in this tutorial, run the following command to get some information about the dataset behind the volume:



zfs list lxd-default/custom/default_docker-01-docker-data-root



Output



NAME                                                    USED  AVAIL     REFER  MOUNTPOINT
lxd-default/custom/default_docker-01-docker-data-root  1.04G   260G     1.04G  -



Here you can see that 1.04G is used on the volume but not on the filesystem, which could explain why we got only 49G and not 50G. I'm still not sure, though, why there is a difference between the disk size and the available size. Again, this is something you need to consider when you define the size of a volume.
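
My guess about the remaining gap between "Size" and "Avail" is ext4's reserved blocks: by default around 5% of an ext4 filesystem is reserved for root, which roughly matches the 2G difference. You can check it inside the VM (the /dev/sdb device name comes from the df output above):

# "Reserved block count" multiplied by the block size is the space kept for root
lxc exec docker-01 -- tune2fs -l /dev/sdb | grep -i reserved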

You also want to know whether Docker was installed successfully:



lxc exec docker-01 docker version



If it works, you finally have a virtual machine with Docker installed automatically on a dedicated volume. You can use the same profile to add more virtual machines like this.
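
As a final smoke test, assuming the VM has internet access, you can run a throwaway container:

lxc exec docker-01 -- docker run --rm hello-world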

Conclusion


So even though I explained some LXD-related issues in this post, profiles can make virtual machine creation easier, and the interface is user-friendly. As of LXD 5.14, Canonical also provides a web-based GUI, so you can install desktop operating systems and get a web console with a graphical interface.

GitHub: canonical/lxd-ui: "Easy and accessible container and virtual machine management. A browser interface for LXD."

At the time of writing this post, Canonical has released a preview version of LXD 5.16, so you can also install that, and fall back to the version mentioned in this post only if the instructions don't work with the new version.

Have you been using LXD for a while and have a solution for some of the issues I mentioned? I would love to hear from you in the comment section! :) Are you a beginner and still don't understand something in the post? The comment section is yours too; by sharing your opinions and questions you can help me create better tutorials in the future.
