
WSL2: Unable to connect to cluster with kubectl #3094

Closed
Jont828 opened this issue Feb 13, 2023 · 11 comments
Labels
kind/external upstream bugs

Comments

@Jont828

Jont828 commented Feb 13, 2023

What happened: After running wsl --update I am unable to access my kind clusters with any kubectl command. When I run kubectl get pods for example, I get the following output:

The connection to the server 127.0.0.1:34047 was refused - did you specify the right host or port?

I also see the following Docker containers:

CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS                       NAMES
4cbc52d137d9   kindest/node:v1.25.3   "/usr/local/bin/entr…"   28 seconds ago   Up 27 seconds   127.0.0.1:34047->6443/tcp   kind-control-plane
500593a33f7c   registry:2             "/entrypoint.sh /etc…"   3 days ago       Up 8 hours      127.0.0.1:5000->5000/tcp    kind-registry

However, if I run docker exec -it kind-control-plane bash, kubectl commands work from inside the container and I can see that the pods are running.

root@kind-control-plane:/# kubectl get pods -A
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          coredns-565d847f94-jdkrg                     1/1     Running   0          23s
kube-system          coredns-565d847f94-xcppj                     1/1     Running   0          23s
kube-system          etcd-kind-control-plane                      1/1     Running   0          38s
kube-system          kindnet-7pphp                                1/1     Running   0          24s
kube-system          kube-apiserver-kind-control-plane            1/1     Running   0          36s
kube-system          kube-controller-manager-kind-control-plane   1/1     Running   0          38s
kube-system          kube-proxy-wgdcd                             1/1     Running   0          24s
kube-system          kube-scheduler-kind-control-plane            1/1     Running   0          38s
local-path-storage   local-path-provisioner-684f458cdd-24lz8      1/1     Running   0          23s
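As a quick host-side check (a sketch: port 34047 is taken from the docker ps output above, so substitute whatever port your kind-control-plane container publishes), probing the forwarded api-server port distinguishes "nothing listening" from a TLS or auth problem:

```shell
# Probe the api-server port that Docker publishes on the host.
# -k: kind's api-server uses a cluster-local CA; -s: quiet output.
API_PORT=34047   # from `docker ps`: 127.0.0.1:34047->6443/tcp
if curl -ks --max-time 5 "https://127.0.0.1:${API_PORT}/version"; then
  echo "api-server port is reachable"
else
  echo "connection to 127.0.0.1:${API_PORT} refused or timed out"
fi
```

A refusal here, combined with kubectl working inside the node, points at the Docker/WSL2 port-forwarding layer rather than at the cluster itself.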

How to reproduce it (as minimally and precisely as possible): Create a kind cluster and try to access it with kubectl:

$ kind create cluster
$ kubectl get pods

Anything else we need to know?:

Environment:

  • kind version: (use kind version): v0.17.0
  • Runtime info: (use docker info or podman info):
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc., v0.10.0)
  compose: Docker Compose (Docker Inc., v2.15.1)
  dev: Docker Dev Environments (Docker Inc., v0.0.5)
  extension: Manages Docker extensions (Docker Inc., v0.2.17)
  sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc., 0.6.0)
  scan: Docker Scan (Docker Inc., v0.23.0)

Server:
 Containers: 2
  Running: 2
  Paused: 0
  Stopped: 0
 Images: 6
 Server Version: 20.10.22
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2 io.containerd.runtime.v1.linux
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 9ba4b250366a5ddde94bb7c9d1def331423aa323
 runc version: v1.1.4-0-g5fd4c4d
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.15.83.1-microsoft-standard-WSL2
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 24
 Total Memory: 31.31GiB
 Name: docker-desktop
 ID: UGNI:7MW5:WVNA:G7RK:AMD2:DRPE:I4BF:XJHU:LCFV:FLWY:6DRZ:JSEU
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 HTTP Proxy: http.docker.internal:3128
 HTTPS Proxy: http.docker.internal:3128
 No Proxy: hubproxy.docker.internal
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  hubproxy.docker.internal:5000
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support
  • OS (e.g. from /etc/os-release): Arch on WSL
  • Kubernetes version: (use kubectl version): 1.25.4
  • Any proxies or other special environment settings?:
@Jont828 Jont828 added the kind/bug Categorizes issue or PR as related to a bug. label Feb 13, 2023
@amarsgithub

I'm also facing the same issue with kind v0.17.0 on Ubuntu WSL and kubectl v1.25.3.

I tried kind export kubeconfig but that doesn't work either.

@BenTheElder
Member

The kubeconfig won't change across reboots.

Sounds like networking broke somewhere in the stack on reboot, I would guess #3054 / #3059

@BenTheElder
Member

Can either of you confirm which iptables variant is in use in the Linux VM?

@BenTheElder
Member

Sorry, #3054 is for the other direction, node to host.

For host to node: if the api-server is indeed listening (which seems likely, given the pods are healthy and reachable from within the node), this is probably a problem with Docker.

If you can exec into the Linux host, you can check whether the api-server is reachable within the node, and then we'll know whether the host => WSL2 hop is at issue.
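A minimal sketch of that hop-by-hop check (assumes the default cluster name kind, so the node container is kind-control-plane, and that curl is available both in the WSL2 shell and inside the node image):

```shell
if command -v docker >/dev/null 2>&1; then
  # Hop 1: from inside the node container, the api-server listens on :6443.
  docker exec kind-control-plane curl -ksf https://localhost:6443/healthz >/dev/null \
    && echo "node -> api-server OK"
  # Hop 2: from the WSL2 shell, hit the port Docker publishes on the host.
  HOST_PORT=$(docker port kind-control-plane 6443/tcp | awk -F: '{print $NF}')
  curl -ksf "https://127.0.0.1:${HOST_PORT}/version" >/dev/null \
    && echo "host -> api-server OK" \
    || echo "host -> api-server FAILED (suggests the host => WSL2/Docker hop)"
else
  echo "docker not found on PATH; run this inside the WSL2 shell"
fi
```

If hop 1 succeeds while hop 2 fails, the cluster is healthy and the break is in the forwarding between the host and the node container.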

I don't develop on Windows and we do not have Windows CI (#1529), so I'll need your help pursuing those possibilities.

@Jont828
Author

Jont828 commented Feb 13, 2023

Here's the output of iptables --list. I'm able to run kubectl get pods inside the Docker container, so I think that means the API server is reachable.

Chain INPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-NODEPORTS  all  --  anywhere             anywhere             /* kubernetes health check service ports */
KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
KUBE-FORWARD  all  --  anywhere             anywhere             /* kubernetes forwarding rules */
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere

Chain KUBE-EXTERNAL-SERVICES (2 references)
target     prot opt source               destination

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
DROP       all  -- !127.0.0.0/8          127.0.0.0/8          /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

Chain KUBE-FORWARD (1 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere             ctstate INVALID
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding conntrack rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-KUBELET-CANARY (0 references)
target     prot opt source               destination

Chain KUBE-NODEPORTS (1 references)
target     prot opt source               destination

Chain KUBE-PROXY-CANARY (0 references)
target     prot opt source               destination

Chain KUBE-SERVICES (2 references)
target     prot opt source               destination

@Jont828
Author

Jont828 commented Feb 13, 2023

I wonder if it's a problem with the server URL. In the Docker container for the control plane, the server in /etc/kubernetes/admin.conf is https://kind-control-plane:6443 while in .kube/config on my machine it's https://127.0.0.1:43633.

@Jont828
Author

Jont828 commented Feb 14, 2023

I found that this is a bug in WSL itself: downgrading from WSL v1.1.2 to v1.0.3 fixes it.
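For anyone checking which release they're on: the WSL package version is separate from the kernel version and is queried from a Windows-side shell. A sketch (wsl.exe flags as of recent Store releases; rolling back means installing an older release from the microsoft/WSL GitHub releases page):

```shell
# Run from PowerShell/cmd, or via interop from inside the distro.
if command -v wsl.exe >/dev/null 2>&1; then
  wsl.exe --version      # first line shows the WSL package version (1.1.2 here)
  # wsl.exe --shutdown   # stop the VM before installing a different release
else
  echo "wsl.exe not on PATH; run these from a Windows shell"
fi
```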

@Jont828 Jont828 closed this as completed Feb 14, 2023
@BenTheElder
Member

Thanks for confirming!! 🙏

@BenTheElder BenTheElder added kind/external upstream bugs and removed kind/bug Categorizes issue or PR as related to a bug. labels Feb 14, 2023
@JacopoBonta

I still have this issue even after downgrading WSL to v1.0.3.

@BenTheElder
Member

The current recommendation is to upgrade to the latest pre-release; see #3180.

@BenTheElder
Member

> I wonder if it's a problem with the server URL. In the Docker container for the control plane, the server in /etc/kubernetes/admin.conf is https://kind-control-plane:6443 while in .kube/config on my machine it's https://127.0.0.1:43633.

No, that's expected: 127.0.0.1:$random_port is a port-forward from the host to the api-server, which isn't usable inside the containers because loopback addresses are local to each network namespace.
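A sketch of seeing both addresses side by side (assumes the default cluster name and that the current kubectl context points at the kind cluster):

```shell
if command -v kubectl >/dev/null 2>&1 && command -v docker >/dev/null 2>&1; then
  # Host side: kind writes a 127.0.0.1:<random_port> forward into ~/.kube/config.
  kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'; echo
  # Node side: admin.conf uses the node's own name on 6443, since 127.0.0.1
  # inside the container is a different (namespace-local) loopback.
  docker exec kind-control-plane grep 'server:' /etc/kubernetes/admin.conf || true
else
  echo "needs kubectl and docker on PATH"
fi
```

Both URLs reach the same api-server; they just describe it from two different network namespaces.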
