Resource Definition isn't being deleted after PVC/PV deletion #762

Open
maxpain opened this issue Jan 20, 2025 · 5 comments

@maxpain

maxpain commented Jan 20, 2025

Hello. When I delete a PVC and its PV, the corresponding DRBD/LINSTOR resources aren't deleted, and I have to delete them manually.

I use this StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nvme-lvm-replicated-async
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: linstor.csi.linbit.com
parameters:
  linstor.csi.linbit.com/usePvcName: "true"
  linstor.csi.linbit.com/storagePool: nvme-lvm
  linstor.csi.linbit.com/autoPlace: "3"
  linstor.csi.linbit.com/layerList: "drbd storage"
  linstor.csi.linbit.com/allowRemoteVolumeAccess: "false"
  linstor.csi.linbit.com/mountOpts: discard
  property.linstor.csi.linbit.com/DrbdOptions/Net/protocol: "A"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

Please note that I use the usePvcName: true parameter here. Maybe that's the reason; I don't know.

@WanzenBug
Member

Can you check the logs of the linstor-csi-controller? They usually contain entries for all create/delete actions.

Please note that I use usePvcName: true parameter here. Maybe that's the reason, I don't know.

Could very well be. Why are you using it? We do not recommend it, as it generally tends to break some assumptions made by Kubernetes.
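
For anyone who hits this again before logs are lost, a minimal sketch of how to capture them, assuming a default Piraeus Operator install (the piraeus-datastore namespace and the "linstor-csi" container name below are assumptions; adjust them to your deployment):

# Find the CSI controller pod (namespace is an assumption for a default install).
kubectl -n piraeus-datastore get pods | grep csi-controller

# Save its logs to a file *before* restarting anything; the container name
# "linstor-csi" is an assumption and may differ in your setup.
kubectl -n piraeus-datastore logs <linstor-csi-controller-pod> -c linstor-csi --since=48h > linstor-csi-controller.log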

@maxpain
Author

maxpain commented Jan 20, 2025

Why are you using it?

Because it makes working with the linstor and lvm commands much easier.

@maxpain
Author

maxpain commented Jan 20, 2025

I couldn't reproduce this bug, but I've run into it twice before.
Unfortunately, I don't have the logs because I restarted the Piraeus pods.

Next time, I will save the logs.

@liniac

liniac commented Jan 30, 2025

This issue appears to be the same as the one tracked internally in LINBIT client case Ticket#2024090810000031; further details for the developers can be found there.

@bernardgut

bernardgut commented Feb 1, 2025

Hello, I just realized I have the same issue. I did not set usePvcName; in fact, I had never heard of that parameter until I read this thread, as it is not in the manual. I first noticed a volume that was not deleted and was causing issues:

(screenshot)

and then noticed:

(screenshot)

I then found this thread and noticed that kubectl linstor rd l returns 295 entries, each marked "ok", while I only have 28 PVs in the cluster. For reference, here are my StorageClasses (a sketch for listing the orphaned resource definitions follows after this dump):

k get sc -o yaml
apiVersion: v1
items:
- allowVolumeExpansion: true
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    creationTimestamp: "2024-09-09T10:07:37Z"
    name: ha-hdd
    resourceVersion: "885"
    uid: 29b37297-7f8b-4f82-a0c6-256b8746bd18
  parameters:
    allowRemoteVolumeAccess: "false"
    csi.storage.k8s.io/fstype: xfs
    linstor.csi.linbit.com/autoPlace: "2"
    linstor.csi.linbit.com/mountOpts: noatime
    linstor.csi.linbit.com/storagePool: hdd-thin
  provisioner: linstor.csi.linbit.com
  reclaimPolicy: Delete
  volumeBindingMode: WaitForFirstConsumer
- allowVolumeExpansion: true
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    creationTimestamp: "2024-09-09T10:07:38Z"
    name: ha-nvme
    resourceVersion: "893"
    uid: c5441bb5-7784-4c76-a5fd-22a48bb4f641
  parameters:
    allowRemoteVolumeAccess: "false"
    csi.storage.k8s.io/fstype: xfs
    linstor.csi.linbit.com/autoPlace: "2"
    linstor.csi.linbit.com/mountOpts: noatime
    linstor.csi.linbit.com/storagePool: nvme-thin
  provisioner: linstor.csi.linbit.com
  reclaimPolicy: Delete
  volumeBindingMode: WaitForFirstConsumer
- allowVolumeExpansion: true
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"creationTimestamp":"2024-09-09T10:07:42Z","labels":{"app":"nfs-server-provisioner","chart":"nfs-server-provisioner-1.8.0","heritage":"Helm","release":"nfs-server-provisioner"},"name":"nfs","resourceVersion":"950","uid":"071171f2-1ac2-41f5-a19d-a571ccf95eaf"},"mountOptions":["vers=4.2","retrans=2","timeo=30"],"provisioner":"sekops.ch/nfs-server-provisioner","reclaimPolicy":"Retain","volumeBindingMode":"Immediate"}
    creationTimestamp: "2024-09-18T20:51:26Z"
    labels:
      app: nfs-server-provisioner
      chart: nfs-server-provisioner-1.8.0
      heritage: Helm
      release: nfs-server-provisioner
    name: nfs
    resourceVersion: "5368835"
    uid: 00c22a87-49b2-4f45-9ca8-2a883ded8b61
  mountOptions:
  - vers=4.2
  - retrans=2
  - timeo=30
  provisioner: sekops.ch/nfs-server-provisioner
  reclaimPolicy: Retain
  volumeBindingMode: Immediate
- allowVolumeExpansion: true
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    annotations:
      storageclass.kubernetes.io/is-default-class: "true"
    creationTimestamp: "2024-09-09T10:07:38Z"
    name: sa-hdd
    resourceVersion: "890"
    uid: 18f5c6f5-154f-4715-8bd5-2c12b0287964
  parameters:
    allowRemoteVolumeAccess: "false"
    csi.storage.k8s.io/fstype: xfs
    linstor.csi.linbit.com/autoPlace: "1"
    linstor.csi.linbit.com/mountOpts: noatime
    linstor.csi.linbit.com/storagePool: hdd-thin
  provisioner: linstor.csi.linbit.com
  reclaimPolicy: Delete
  volumeBindingMode: WaitForFirstConsumer
- allowVolumeExpansion: true
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    creationTimestamp: "2024-09-09T10:07:39Z"
    name: sa-nvme
    resourceVersion: "898"
    uid: 86456351-0993-46d8-a355-efd4dbab34c2
  parameters:
    allowRemoteVolumeAccess: "false"
    csi.storage.k8s.io/fstype: xfs
    linstor.csi.linbit.com/autoPlace: "1"
    linstor.csi.linbit.com/mountOpts: noatime
    linstor.csi.linbit.com/storagePool: nvme-thin
  provisioner: linstor.csi.linbit.com
  reclaimPolicy: Delete
  volumeBindingMode: WaitForFirstConsumer
kind: List
metadata:
  resourceVersion: ""
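
Not from the original comments, but given the 295-vs-28 mismatch above, here is a rough sketch for listing resource definitions that no longer have a matching PV. It assumes the kubectl linstor plugin and jq are available, that resource definition names match PV names (the default when usePvcName is not set), and that the machine-readable output parses with the jq path shown; that path may need adjusting for your LINSTOR version:

# PV names known to Kubernetes
kubectl get pv -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | sort > /tmp/pv-names.txt
# Resource definition names known to LINSTOR (-m = machine-readable JSON;
# the '.[][].name' path is an assumption about the JSON layout)
kubectl linstor -m resource-definition list | jq -r '.[][].name' | sort > /tmp/rd-names.txt
# Resource definitions with no matching PV, i.e. candidates for manual cleanup
comm -23 /tmp/rd-names.txt /tmp/pv-names.txt
# Once confirmed unused, each leftover can be removed by hand, e.g.:
# kubectl linstor resource-definition delete <resource-definition-name>

This is only a diagnostic sketch; it does not distinguish resource definitions that were created outside the CSI driver on purpose.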
