
Cluster Created with kind Fails to Mount containerd HostPath #83

Closed
@aauren

Description


What steps did you take and what happened:
When creating a kubemark cluster using kind and CAPD, the kubemark pods stay in ContainerCreating status, and their description shows a FailedMount warning:

Warning  FailedMount  10s (x6 over 26s)  kubelet            MountVolume.SetUp failed for volume "containerd-sock" : hostPath type check failed: unix:///run/containerd/containerd.sock is not a socket file

Exec'ing into the worker node container in Docker shows that the file exists and is a socket:

% kubectl get pods -o wide -n default
NAME                                 READY   STATUS              RESTARTS   AGE     IP       NODE              NOMINATED NODE   READINESS GATES
kube-node-mgmt-kubemark-md-0-62h9n   0/1     ContainerCreating   0          5m12s   <none>   kubemark-worker   <none>           <none>
kube-node-mgmt-kubemark-md-0-kks4r   0/1     ContainerCreating   0          5m12s   <none>   kubemark-worker   <none>           <none>
kube-node-mgmt-kubemark-md-0-mkt6m   0/1     ContainerCreating   0          5m12s   <none>   kubemark-worker   <none>           <none>
kube-node-mgmt-kubemark-md-0-wpdhp   0/1     ContainerCreating   0          5m12s   <none>   kubemark-worker   <none>           <none>

% docker exec -it kubemark-worker /bin/bash

root@kubemark-worker:/# ls -l /run/containerd/containerd.sock
srw-rw---- 1 root root 0 Apr  5 20:15 /run/containerd/containerd.sock
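
My read of the event is that the generated pod spec carries the CRI-style URI in the hostPath volume, and kubelet stats that string literally, so the Socket type check fails even though the socket itself is present. A minimal sketch of a volume shape that would produce exactly that event (the volume name comes from the event above; the rest is my guess at the generated spec, not the provider's actual template):

volumes:
- name: containerd-sock
  hostPath:
    # kubelet resolves this string as a filesystem path; with the unix://
    # scheme prefixed it does not point at the socket shown above, so the
    # Socket type check fails with the FailedMount warning.
    path: unix:///run/containerd/containerd.sock
    type: Socket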

Steps to Reproduce:

  1. Add kubemark provider to clusterctl.yaml:
providers:
- name: "kubemark"
  url: "https://github.com/kubernetes-sigs/cluster-api-provider-kubemark/releases/v0.5.0/infrastructure-components.yaml"
  type: "InfrastructureProvider"
  2. Create kind cluster config:
% cat kubemark.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: kubemark
nodes:
- role: control-plane
  extraMounts:
  - containerPath: /var/run/docker.sock
    hostPath: /var/run/docker.sock
- role: worker
  extraMounts:
  - containerPath: /var/run/docker.sock
    hostPath: /var/run/docker.sock
  3. Create kind cluster using config:
kind create cluster --config kubemark.yaml
  4. Initialize CAPI cluster:
export CLUSTER_TOPOLOGY=true
clusterctl init --infrastructure kubemark,docker
  5. Wait until CAPI cluster pods are fully deployed
  6. Create kubeadm cluster and apply it:
export SERVICE_CIDR=["172.17.0.0/16"]
export POD_CIDR=["192.168.122.0/24"]
clusterctl generate cluster kube-node-mgmt --infrastructure kubemark --flavor capd --kubernetes-version 1.26.3 --control-plane-machine-count=1 --worker-machine-count=4 | kubectl apply -f-
  7. Wait for the CAPD controller to launch the node containers and for the kubemark pods to be created in the default namespace, then watch them stall at ContainerCreating (see the inspection sketch below)
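
To confirm where the bad path comes from, dumping one of the stuck pods and looking at its containerd-sock volume should show the URI-style path. The pod name is taken from the listing above, and the fragment below is only my expectation based on the FailedMount event, not captured output:

# kubectl get pod kube-node-mgmt-kubemark-md-0-62h9n -n default -o yaml
spec:
  volumes:
  - name: containerd-sock
    hostPath:
      path: unix:///run/containerd/containerd.sock   # URI scheme instead of a plain path
      type: Socket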

What did you expect to happen:

I expected the kubemark / CAPD cluster to come up and the pods to reach the Running state.

Anything else you would like to add:

I tried using minikube instead of kind to create the cluster and ran into the same issue with the containerd socket not mounting.

I was originally testing against Kubernetes 1.23.x, but I found the issue where CAPD was switched to the unix:/// style socket specification in the host mount (kubernetes-sigs/cluster-api#6155). Since that issue mentioned problems with 1.24.x versions of Kubernetes, I switched to 1.26.3, but no matter what I try I can't get past this error.
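
For context, my understanding of that change is that the unix:// scheme is what CRI clients expect in an endpoint string (for example crictl's runtime endpoint or the kubelet's --container-runtime-endpoint flag), while a hostPath volume needs the bare filesystem path for the Socket type check to pass. Illustrative only, not taken from the provider's manifests:

# Endpoint string handed to a CRI client (URI form is expected here):
#   unix:///run/containerd/containerd.sock
# Bare path needed by a hostPath volume for the Socket check to pass:
volumes:
- name: containerd-sock
  hostPath:
    path: /run/containerd/containerd.sock
    type: Socket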

I'm using Docker version: 23.0.1

Environment:

  • cluster-api version:
% clusterctl version
clusterctl version: &version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.1", GitCommit:"39d87e91080088327c738c43f39e46a7f557d03b", GitTreeState:"clean", BuildDate:"2023-04-04T17:31:43Z", GoVersion:"go1.19.6", Compiler:"gc", Platform:"linux/amd64"}
  • cluster-api-provider-kubemark version: v0.5.0
  • Kubernetes version: (use kubectl version):
% kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.26.3
Kustomize Version: v4.5.7
Server Version: v1.26.3
  • OS (e.g. from /etc/os-release): Ubuntu 22.04.2

/kind bug
