Ghost Dynamic Host Volumes after cluster rebuild #27313

@katiekloss

Description

Nomad version

Nomad v1.11.0
BuildDate 2025-11-11T16:18:19Z
Revision 9103d93

Operating system and Environment details

Alpine Edge

Issue

I recently wiped all servers in a 3-member cluster and bootstrapped it from scratch. Prior to rebuilding, I had about a dozen Dynamic Host Volumes spread across 4 clients. During the rebuild, I removed the contents of each node's server directory and left the client directory intact.
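
For reference, the directories in question are the standard subdirectories of the Nomad data_dir. A minimal sketch of an agent config for one of these nodes; the data_dir path is an assumption, and the comments reflect my understanding of where the surviving state lives:

    # Illustrative agent config; the data_dir path is an assumption.
    # During the rebuild, <data_dir>/server (Raft and server state) was
    # emptied on every node, while <data_dir>/client -- presumably where
    # the client's record of its dynamic host volumes survived -- was
    # left intact.
    data_dir = "/opt/nomad/data"

    server {
      enabled          = true
      bootstrap_expect = 3
    }

    client {
      enabled = true
    }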

After bootstrapping, creating a Dynamic Host Volume on a given node fails if that node previously held a volume with the same name, even though none of the old volumes appear in nomad volume status. These "ghost" volumes still show up on their clients in the web UI, yet the scheduler fails to place task groups that mount them.

The only workarounds I've found are to change the name of the volume, or to register the volume on a new node.

Reproduction steps

  • Build a cluster and create a Dynamic Host Volume on any node (a sketch of the spec follows this list). Note the node ID.
  • Delete the contents of each server's server directory and bootstrap a new cluster.
  • Observe that the volume no longer appears in nomad volume status, but is still present on the node's status page in the web UI.
  • Register a job with a task group that claims the volume.
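
A rough sketch of the volume spec for the first step, created with nomad volume create volume.hcl; the plugin_id, capability block, and node ID are assumptions, not the exact spec:

    # volume.hcl -- illustrative only; plugin_id, capability, and node_id
    # are assumed values. The name matches the job file below.
    name      = "paperless-data"
    type      = "host"
    plugin_id = "mkdir"

    # Pinning to a specific node makes it easy to note the node ID.
    node_id = "<node-id>" # placeholder

    capability {
      access_mode     = "single-node-writer"
      attachment_mode = "file-system"
    }

After the rebuild, re-running nomad volume create with this same spec against the affected node is what produces the 500 shown under the client logs below; the rename workaround only requires changing the name field.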

Expected Result

The scheduler should place the task group on the same node as the volume.

Actual Result

    Constraint *missing compatible host volumes* filtered 4 nodes

Job file (if appropriate)

    volume "data" {
      source = "paperless-data"
      type = "host"
    }
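
For context, that volume block sits inside a group stanza; below is a minimal sketch of the surrounding job, where the job, group, and task names and the image are placeholders rather than the actual file:

    # Illustrative job; only the volume block is taken from the real file.
    job "paperless" {
      group "app" {
        volume "data" {
          source = "paperless-data"
          type   = "host"
        }

        task "server" {
          driver = "docker"

          config {
            image = "example/paperless:latest" # placeholder image
          }

          volume_mount {
            volume      = "data"
            destination = "/data"
          }
        }
      }
    }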

Nomad Client logs (if appropriate)

When creating a volume with the same definition:

    Error creating volume: Unexpected response code: 500 (rpc error: HostVolume.Create error: volume name already exists on this node)
