Failed creating a 1.22 cluster #2436

Closed · scraly opened this issue Aug 25, 2021 · 15 comments
Labels: kind/bug (Categorizes issue or PR as related to a bug.) · triage/not-reproducible (Indicates an issue can not be reproduced as described.)

Comments

scraly commented Aug 25, 2021

Hi,
I can't create a cluster with Kubernetes v1.22.0.

What happened:

$ kind create cluster --image kindest/node:v1.22.0 --name my-kube-1.22
Creating cluster "my-kube-1.22" ...
 ✓ Ensuring node image (kindest/node:v1.22.0) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✗ Starting control-plane 🕹️
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged my-kube-1.22-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
Command Output: I0825 10:00:52.478281     222 initconfiguration.go:247] loading configuration from "/kind/kubeadm.conf"
[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=JoinConfiguration
[init] Using Kubernetes version: v1.22.0
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0825 10:00:52.500373     222 certs.go:111] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0825 10:00:52.677296     222 certs.go:487] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost my-kube-1.22-control-plane] and IPs [10.96.0.1 172.18.0.7 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0825 10:00:53.178539     222 certs.go:111] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0825 10:00:53.316190     222 certs.go:487] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0825 10:00:53.553550     222 certs.go:111] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0825 10:00:53.709659     222 certs.go:487] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost my-kube-1.22-control-plane] and IPs [172.18.0.7 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost my-kube-1.22-control-plane] and IPs [172.18.0.7 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0825 10:00:55.804136     222 certs.go:77] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0825 10:00:56.209511     222 kubeconfig.go:103] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0825 10:00:56.558570     222 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0825 10:00:56.762241     222 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0825 10:00:56.964844     222 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I0825 10:00:57.079087     222 kubelet.go:65] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0825 10:00:57.203850     222 manifests.go:99] [control-plane] getting StaticPodSpecs
I0825 10:00:57.208547     222 certs.go:487] validating certificate period for CA certificate
I0825 10:00:57.208800     222 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0825 10:00:57.208882     222 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0825 10:00:57.208946     222 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0825 10:00:57.209065     222 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0825 10:00:57.209128     222 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0825 10:00:57.223324     222 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I0825 10:00:57.223384     222 manifests.go:99] [control-plane] getting StaticPodSpecs
I0825 10:00:57.224029     222 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0825 10:00:57.224054     222 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0825 10:00:57.224073     222 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0825 10:00:57.224093     222 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0825 10:00:57.224107     222 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0825 10:00:57.224139     222 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0825 10:00:57.224154     222 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0825 10:00:57.225368     222 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I0825 10:00:57.225424     222 manifests.go:99] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0825 10:00:57.226123     222 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0825 10:00:57.228683     222 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0825 10:00:57.232440     222 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0825 10:00:57.232535     222 waitcontrolplane.go:89] [wait-control-plane] Waiting for the API server to be healthy
I0825 10:00:57.234275     222 loader.go:372] Config loaded from file:  /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0825 10:00:57.279793     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 9 milliseconds
I0825 10:00:57.782712     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0825 10:00:58.282246     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0825 10:00:58.781946     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0825 10:00:59.282343     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0825 10:00:59.782205     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0825 10:01:00.282920     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0825 10:01:00.782485     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0825 10:01:01.283610     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 2 milliseconds
I0825 10:01:01.782496     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 2 milliseconds
I0825 10:01:02.282619     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0825 10:01:02.782083     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0825 10:01:03.281558     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0825 10:01:03.781529     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0825 10:01:04.282191     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0825 10:01:04.781799     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0825 10:01:05.281392     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0825 10:01:05.781109     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0825 10:01:06.282098     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0825 10:01:06.781832     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0825 10:01:07.282118     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0825 10:01:07.782364     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0825 10:01:08.287689     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0825 10:01:08.782547     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0825 10:01:09.283100     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0825 10:01:19.761570     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 10009 milliseconds
I0825 10:01:30.298082     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 10039 milliseconds
[kubelet-check] Initial timeout of 40s passed.
I0825 10:01:40.845080     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 10022 milliseconds
I0825 10:01:51.283154     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 10017 milliseconds
I0825 10:02:01.755295     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 9994 milliseconds
I0825 10:02:10.138265     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 7872 milliseconds
I0825 10:02:13.996194     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 3770 milliseconds
I0825 10:02:17.150476     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 2920 milliseconds
I0825 10:02:23.826265     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 6635 milliseconds
I0825 10:02:28.250642     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 4017 milliseconds
I0825 10:02:38.847583     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 10043 milliseconds
I0825 10:02:46.165652     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 6952 milliseconds
I0825 10:02:46.178291     222 with_retry.go:171] Got a Retry-After 1s response for attempt 1 to https://my-kube-1.22-control-plane:6443/healthz?timeout=10s
I0825 10:02:47.236849     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 25 milliseconds
I0825 10:02:47.652091     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 4 milliseconds
I0825 10:02:48.145606     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0825 10:02:55.469843     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 6786 milliseconds
I0825 10:02:56.244766     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 564 milliseconds
I0825 10:03:06.749312     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 10039 milliseconds
I0825 10:03:17.266398     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 10035 milliseconds
I0825 10:03:27.886935     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 9992 milliseconds
I0825 10:03:35.914296     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 7713 milliseconds
I0825 10:03:46.188635     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 9947 milliseconds
I0825 10:03:56.665464     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 10047 milliseconds
I0825 10:04:03.562643     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 6392 milliseconds
I0825 10:04:14.699100     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 10073 milliseconds
I0825 10:04:25.128622     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 10023 milliseconds
I0825 10:04:35.675110     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 9984 milliseconds
I0825 10:04:46.113944     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 10021 milliseconds
I0825 10:04:56.611779     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 10012 milliseconds
I0825 10:05:07.107218     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 10014 milliseconds
I0825 10:05:17.195250     222 round_trippers.go:454] GET https://my-kube-1.22-control-plane:6443/healthz?timeout=10s  in 10049 milliseconds

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'

couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:116
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:153
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:852
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:960
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:897
k8s.io/kubernetes/cmd/kubeadm/app.Run
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:225
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1371
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:153
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:852
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:960
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:897
k8s.io/kubernetes/cmd/kubeadm/app.Run
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:225
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1371

What you expected to happen:

I expected the cluster to be created and running :-).

How to reproduce it (as minimally and precisely as possible):

I'm on macOS.
I execute the following command:

$ kind create cluster --image kindest/node:v1.22.0 --name my-kube-1.22

Anything else we need to know?:

Environment:

  • kind version: (use kind version):
kind v0.11.1 go1.16.4 darwin/amd64
  • Kubernetes version: (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
  • Docker version: (use docker info):
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
  compose: Docker Compose (Docker Inc., 2.0.0-beta.1)
  scan: Docker Scan (Docker Inc., v0.8.0)

Server:
 Containers: 38
  Running: 5
  Paused: 0
  Stopped: 33
 Images: 52
 Server Version: 20.10.6
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runtime.v1.linux runc io.containerd.runc.v2
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
 runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.10.25-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 6
 Total Memory: 1.941GiB
 Name: docker-desktop
 ID: 26GW:TLZD:SGWC:JG5R:5LB7:3P2Z:Q2NC:CJ7O:LMBI:RNNI:WAF2:NPHF
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 HTTP Proxy: http.docker.internal:3128
 HTTPS Proxy: http.docker.internal:3128
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: true
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
scraly added the kind/bug label Aug 25, 2021
BenTheElder (Member) commented Aug 25, 2021

Total Memory: 1.941GiB

That may be your problem. Kubernetes resource requirements fluctuate across versions and aren't tightly scoped, and scaling down isn't a focus upstream. Less than 2 GB is below kubeadm's recommendations, though kind aims to work with less. (Also, some of this memory will be going to things other than the cluster you've created.)
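
A quick way to check the Docker Desktop VM's memory allowance (a sketch; MemTotal is a standard docker info template field):

$ docker info --format '{{.MemTotal}}'
# prints the VM's total memory in bytes; Docker Desktop lets you raise it under Preferences → Resources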

Can you run cluster creation with --retain and then share the output of kind export logs?

Also, FWIW:

--image kindest/node:v1.20.2

This is not recommended; please pin the image digests. (See the release notes and the docs for more on that: https://github.com/kubernetes-sigs/kind/releases/tag/v0.11.1)
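
For reference, a sketch of the requested flow, reusing the cluster name and pinned digest from this thread:

$ kind create cluster --image kindest/node:v1.22.0@sha256:b8bda84bb3a190e6e028b1760d277454a72267a5454b57db34437c34a588d047 --name my-kube-1.22 --retain
$ kind export logs --name my-kube-1.22   # prints the directory the logs were written to
$ kind delete cluster --name my-kube-1.22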

scraly (Author) commented Aug 25, 2021

Thanks for your response.
I tried to execute this new command:

$ kind create cluster --image kindest/node:v1.22.0@sha256:b8bda84bb3a190e6e028b1760d277454a72267a5454b57db34437c34a588d047 --name my-kube-1.22 --retain

Here are the logs:
kind-logs.zip

BenTheElder (Member) commented

Hmm, it seems we didn't get any node logs. On the off chance: perhaps the kind create cluster command had --name my-kube-1.22 but kind export logs didn't have --name my-kube-1.22? (All cluster commands require --name if not using the default.)

scraly (Author) commented Aug 26, 2021

Good point.

I re-executed the command:

$ kind create cluster --image kindest/node:v1.22.0@sha256:b8bda84bb3a190e6e028b1760d277454a72267a5454b57db34437c34a588d047 --name my-kube-1.22.0 --retain
Creating cluster "my-kube-1.22.0" ...
 ✓ Ensuring node image (kindest/node:v1.22.0) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✗ Starting control-plane 🕹️
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged my-kube-1.22.0-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
Command Output: I0826 07:15:38.227789     191 initconfiguration.go:247] loading configuration from "/kind/kubeadm.conf"
[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=JoinConfiguration
[init] Using Kubernetes version: v1.22.0
...

kind-logs-1.22.0.zip

And exported the logs again :-).

$ kind export logs --name my-kube-1.22.0
Exporting logs for cluster "my-kube-1.22.0" to:
/private/var/folders/mw/mrh8kyx94kn1rz63pq1fcpq80000gn/T/234060201

BenTheElder (Member) commented

I've been in and out and didn't get back to this. Looking at the zip you uploaded, we got no node logs there, which is suspicious; there should at least be the docker logs of the node container, if nothing else.

BenTheElder (Member) commented Sep 10, 2021

FWIW, kind create cluster --image kindest/node:v1.22.0@sha256:b8bda84bb3a190e6e028b1760d277454a72267a5454b57db34437c34a588d047 --name my-kube-1.22.0 --retain works as intended on my Mac with kind v0.11.1. I can't reproduce this.

$ kind create cluster --image kindest/node:v1.22.0@sha256:b8bda84bb3a190e6e028b1760d277454a72267a5454b57db34437c34a588d047 --name my-kube-1.22.0 --retain
Creating cluster "my-kube-1.22.0" ...
 ✓ Ensuring node image (kindest/node:v1.22.0) 🖼 
 ✓ Preparing nodes 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
Set kubectl context to "kind-my-kube-1.22.0"
You can now use your cluster with:

kubectl cluster-info --context kind-my-kube-1.22.0

Not sure what to do next? 😅  Check out https://kind.sigs.k8s.io/docs/user/quick-start/

Maybe you don't have enough free resources in the docker VM?

BenTheElder added the triage/not-reproducible label Sep 10, 2021
denis-tingaikin commented

kind create cluster --image kindest/node:v1.22.0

and

kind create cluster --image kindest/node:v1.22.1

both work with kind 0.11.1 on my machine.

nayihz commented Oct 12, 2021

FWIW, kind create cluster --image kindest/node:v1.22.0@sha256:b8bda84bb3a190e6e028b1760d277454a72267a5454b57db34437c34a588d047 --name my-kube-1.22.0 --retain works as intended on my Mac with kind v0.11.1. I can't reproduce this. […]

I can create a cluster successfully when using an official image (e.g. kindest/node:v1.21.1), but creating a cluster fails with an image that I built myself, and the panic log is the same as this one. @BenTheElder

Here is the log:
logs.zip
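
(For context, a sketch of how a custom node image is typically built with kind v0.11.x, which expects a local Kubernetes source checkout; the tag below is just an example:)

$ kind build node-image --image kindest/node:custom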

BenTheElder (Member) commented

Unlike the original issue here, you have node logs, so something is different; we still have no idea what @scraly's issue is.

@cmssczy, for some reason yours is trying to pull the pause image, which it should not be doing ... 🤔 (also, it can't seem to reach the upstream host to pull that image). I don't know why your image doesn't have the pause image, but you can see it's not among the images being unpacked at containerd startup. Something is wrong with the image you are running.
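
A quick way to verify whether the pause image is baked into a node image (node container name assumed from this thread; crictl ships inside kind node images):

$ docker exec my-kube-1.22.0-control-plane crictl images | grep pause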

BenTheElder (Member) commented

@cmssczy Can you file a new support ticket with some more details about your host environment, what steps you are taking to build the image, etc.? Then we can debug the pause image issue there.

nayihz commented Oct 12, 2021

@cmssczy Can you file a new support ticket with some more details about your host environment and what steps you are taking to build the image etc? then we can debug the pause image issue there.

Thanks for your reply. I will create a new issue for this. @BenTheElder

BenTheElder (Member) commented

Closing as not reproducible for now; the other user issues seem unrelated.

If we get more details or a reproducer, we will revisit.

haapjari commented May 16, 2022

I had exactly the same problem. It was an environmental issue with proxies in my case; maybe that could apply to you as well.

My problem was that, when using Docker Desktop with WSL2, the containers for some reason inherited my proxy settings from ~/.docker/config.json, so the container was trying to reach the Kubernetes API server through the proxy from inside the container, causing the timeout. This happened even though env | grep proxy showed the proxies as unset.

While the control plane is starting, you can docker exec -it <container_id> bash into the control-plane container and run systemctl status kubelet and journalctl -u kubelet -e to see the kubelet status.

So either empty those proxy settings and try again, or add the API server to the noProxy variable.
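
A sketch of where to look, using the path from this comment; the "proxies" block below is a hypothetical example of what Docker Desktop injects into every container:

$ cat ~/.docker/config.json
# e.g. {"proxies": {"default": {"httpProxy": "http://proxy.example:3128", "noProxy": "localhost,127.0.0.1"}}}
$ docker exec -it <container_id> env | grep -i proxy   # confirm what the control-plane container actually inherited

Removing the "proxies" block (or adding the API server endpoint to its noProxy list) and recreating the cluster should avoid the timeout.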

BenTheElder (Member) commented

My problem was, when using Docker Desktop with WSL2, for some reason the Containers inherited my proxy settings from ~/.docker/config.json

This is how Docker behaves; I don't think there's a way to override this, but we could detect it and treat it similarly to when the env vars are set: #1175

This was even though proxies were unset with env | grep proxy

FWIW, you'll want to do env | grep -i proxy, since there's no standard here and kind supports HTTP_PROXY and http_proxy, etc.

So either empty those proxy settings and try again or add the API Server to noProxy variable.

The latter is something kind ought to handle better: we auto-inject things into no_proxy, and we should be ensuring the API server is in there. At the moment, though, we only inject no_proxy if any proxy env vars are set.

#1175
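
Until then, a workaround sketch; the extra NO_PROXY entries are assumptions, and kind passes any proxy env vars it finds at creation time through to the nodes:

$ export NO_PROXY="${NO_PROXY},localhost,127.0.0.1,.svc,.svc.cluster.local"
$ kind create cluster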

haapjari commented

Thanks for the reply. Maybe there should be some mention in the documentation that, if you are in a corporate proxy environment for example (like in my case), these deployments need to happen without proxies set, since they're inherited by the containers.
