
Purging LXD snap does not properly cleanup storage pool #14918

Open · simondeziel opened this issue Feb 3, 2025 · 2 comments

@simondeziel (Member) commented:
## ZFS issue

It seems that upon purging the LXD snap (latest/edge), the default zpool is not exported, which prevents reusing the default zpool name on a subsequent install.

Reproducer:

```
# snap install lxd --channel latest/edge
lxd (edge) git-b98b242 from Canonical✓ installed
# lxd init --auto --storage-backend=zfs
# snap remove --purge lxd
2025-02-03T22:16:34Z INFO Waiting for "snap.lxd.daemon.service" to stop.
lxd removed
# snap install lxd --channel latest/edge
lxd (edge) git-b98b242 from Canonical✓ installed
# lxd init --auto --storage-backend=zfs
Error: Failed to create storage pool "default": Failed to run: zpool create -m none -O compression=on default /var/snap/lxd/common/lxd/disks/default.img: exit status 1 (cannot create 'default': pool already exists)
```

After the failed `lxd init`, the default zpool is still active; exporting it works around the issue:

```
# LD_LIBRARY_PATH=/snap/lxd/current/lib/:/snap/lxd/current/lib/x86_64-linux-gnu/:/snap/lxd/current/lib/x86_64-linux-gnu/ceph:/snap/lxd/current/zfs-2.2/lib PATH=/snap/lxd/current/zfs-2.2/bin:/snap/lxd/current/bin:$PATH nsenter --mount=/run/snapd/ns/lxd.mnt -- zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
default  29.5G   621K  29.5G        -         -     0%     0%  1.00x    ONLINE  -
# LD_LIBRARY_PATH=/snap/lxd/current/lib/:/snap/lxd/current/lib/x86_64-linux-gnu/:/snap/lxd/current/lib/x86_64-linux-gnu/ceph:/snap/lxd/current/zfs-2.2/lib PATH=/snap/lxd/current/zfs-2.2/bin:/snap/lxd/current/bin:$PATH nsenter --mount=/run/snapd/ns/lxd.mnt -- zpool export default
# lxd init --auto --storage-backend=zfs
```

Note: latest/candidate is not affected by this, so something changed recently.
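The long environment/`nsenter` prefix used in the workaround above is repeated on every invocation; it can be wrapped in a small helper function. This is just a convenience sketch (the `lxd_tool` name is made up here), reusing the exact paths from the commands above:

```shell
#!/bin/sh
# Convenience wrapper to run the ZFS/LVM tools shipped inside the LXD snap
# from within its mount namespace. Paths match the workaround commands above
# and may differ between snap revisions.
lxd_tool() {
    LD_LIBRARY_PATH=/snap/lxd/current/lib/:/snap/lxd/current/lib/x86_64-linux-gnu/:/snap/lxd/current/lib/x86_64-linux-gnu/ceph:/snap/lxd/current/zfs-2.2/lib \
    PATH=/snap/lxd/current/zfs-2.2/bin:/snap/lxd/current/bin:$PATH \
    nsenter --mount=/run/snapd/ns/lxd.mnt -- "$@"
}

# Usage (equivalent to the workaround above):
#   lxd_tool zpool list
#   lxd_tool zpool export default
```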

## LVM issue

This one manifests on both latest/edge and latest/candidate:

```
# snap install lxd --channel latest/candidate
lxd (candidate) 6.2-7a6ecda from Canonical✓ installed
# lxd init --auto --storage-backend=lvm
# snap remove --purge lxd
2025-02-03T22:22:40Z INFO Waiting for "snap.lxd.daemon.service" to stop.
lxd removed
# snap install lxd --channel latest/candidate
lxd (candidate) 6.2-7a6ecda from Canonical✓ installed
# lxd init --auto --storage-backend=lvm
Error: Failed to create storage pool "default": A volume group already exists called "default"
```

The cause and workaround are similar:

```
# LD_LIBRARY_PATH=/snap/lxd/current/lib/:/snap/lxd/current/lib/x86_64-linux-gnu/:/snap/lxd/current/lib/x86_64-linux-gnu/ceph:/snap/lxd/current/zfs-2.2/lib PATH=/snap/lxd/current/zfs-2.2/bin:/snap/lxd/current/bin:$PATH nsenter --mount=/run/snapd/ns/lxd.mnt -- vgs
  VG      #PV #LV #SN Attr   VSize   VFree
  default   1   1   0 wz--n- <30.00g    0 
# LD_LIBRARY_PATH=/snap/lxd/current/lib/:/snap/lxd/current/lib/x86_64-linux-gnu/:/snap/lxd/current/lib/x86_64-linux-gnu/ceph:/snap/lxd/current/zfs-2.2/lib PATH=/snap/lxd/current/zfs-2.2/bin:/snap/lxd/current/bin:$PATH nsenter --mount=/run/snapd/ns/lxd.mnt -- vgremove default
Do you really want to remove volume group "default" containing 1 logical volumes? [y/n]: y
Do you really want to remove and DISCARD active logical volume default/LXDThinPool? [y/n]: y
  Logical volume "LXDThinPool" successfully removed.
  Volume group "default" successfully removed
# LD_LIBRARY_PATH=/snap/lxd/current/lib/:/snap/lxd/current/lib/x86_64-linux-gnu/:/snap/lxd/current/lib/x86_64-linux-gnu/ceph:/snap/lxd/current/zfs-2.2/lib PATH=/snap/lxd/current/zfs-2.2/bin:/snap/lxd/current/bin:$PATH nsenter --mount=/run/snapd/ns/lxd.mnt -- pvs
  PV         VG Fmt  Attr PSize  PFree 
  /dev/loop9    lvm2 ---  30.00g 30.00g
# LD_LIBRARY_PATH=/snap/lxd/current/lib/:/snap/lxd/current/lib/x86_64-linux-gnu/:/snap/lxd/current/lib/x86_64-linux-gnu/ceph:/snap/lxd/current/zfs-2.2/lib PATH=/snap/lxd/current/zfs-2.2/bin:/snap/lxd/current/bin:$PATH nsenter --mount=/run/snapd/ns/lxd.mnt -- pvremove /dev/loop9
  Labels on physical volume "/dev/loop9" successfully wiped.
# lxd init --auto --storage-backend=lvm
```
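Taken together, the manual cleanup after a purge amounts to exporting the stale zpool (ZFS case) or removing the stale volume group and wiping the PV label (LVM case). Below is a hedged sketch of that sequence, assuming the pool/VG name "default" and the PV `/dev/loop9` from the reproducers above (the `in_lxd_ns` helper name is my own; verify with `zpool list`, `vgs` and `pvs` before removing anything):

```shell
#!/bin/sh
# Manual post-purge cleanup sketch combining the ZFS and LVM workarounds
# above. Pool/VG name "default" and PV /dev/loop9 are taken from the
# reproducers; check your own system before running destructive commands.
set -eu

# Run a tool from inside the LXD snap's mount namespace, using the same
# library/binary paths as the workaround commands above.
in_lxd_ns() {
    LD_LIBRARY_PATH=/snap/lxd/current/lib/:/snap/lxd/current/lib/x86_64-linux-gnu/:/snap/lxd/current/lib/x86_64-linux-gnu/ceph:/snap/lxd/current/zfs-2.2/lib \
    PATH=/snap/lxd/current/zfs-2.2/bin:/snap/lxd/current/bin:$PATH \
    nsenter --mount=/run/snapd/ns/lxd.mnt -- "$@"
}

# ZFS: export the leftover zpool so "lxd init" can recreate it
in_lxd_ns zpool export default || true

# LVM: remove the leftover volume group, then wipe the PV label
in_lxd_ns vgremove -y default || true
in_lxd_ns pvremove /dev/loop9 || true
```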
@tomponline (Member) commented:

> ## LVM issue

Please split this into a separate issue. Bundling them together is problematic when one aspect gets fixed but the other remains.

@tomponline (Member) commented:

> Note: latest/candidate is not affected by this, so something changed recently.

@simondeziel are you OK to investigate this, starting by analysing the differences in the ZFS parts of the snap on latest/candidate vs latest/edge?
