Calico CNI: there are always a few control plane nodes that cannot access the node IP / cluster IP; NodePort is accessible from all nodes #9353
-
Cluster Env: 3 master nodes and one worker node, stacked etcd.

kubeadm init --pod-network-cidr 192.168.0.0/16 --control-plane-endpoint nlb-iiucdxea6hym5n9fax.cn-hongkong.nlb.aliyuncs.com --upload-certs
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/custom-resources.yaml

[root@master02 ~]# k get no -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master01 Ready control-plane 6m38s v1.31.1 172.22.22.47 <none> Alibaba Cloud Linux 3.2104 U10 (OpenAnolis Edition) 5.10.134-17.2.al8.x86_64 containerd://1.7.22
master02 Ready control-plane 5m53s v1.31.1 172.29.84.5 <none> Alibaba Cloud Linux 3.2104 U10 (OpenAnolis Edition) 5.10.134-17.2.al8.x86_64 containerd://1.7.22
master03 Ready control-plane 5m45s v1.31.1 172.22.22.48 <none> Alibaba Cloud Linux 3.2104 U10 (OpenAnolis Edition) 5.10.134-17.2.al8.x86_64 containerd://1.7.22
worker01 Ready <none> 4m25s v1.31.1 172.22.22.46 <none> Alibaba Cloud Linux 3.2104 U10 (OpenAnolis Edition) 5.10.134-17.2.al8.x86_64 containerd://1.7.22
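For reference, the custom-resources.yaml applied above creates an operator Installation whose default IP pool covers the same 192.168.0.0/16 CIDR passed to kubeadm. A minimal sketch of the relevant part, assuming the unmodified v3.28 manifest defaults (double-check against the actual file), looks like this:

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 192.168.0.0/16
      encapsulation: VXLANCrossSubnet   # encapsulate only traffic that crosses a subnet boundary
      natOutgoing: Enabled
      nodeSelector: all()

Note that the INTERNAL-IPs above appear to span two subnets (172.22.22.x and 172.29.84.x), which matters when the pool uses CrossSubnet encapsulation.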
[root@master01 ~]# k run test --image nginx
pod/test created
[root@master01 ~]# k get po -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test 1/1 Running 0 10s 192.168.5.4 worker01 <none> <none>
[root@master02 ~]# k get po -A -owide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-apiserver calico-apiserver-6b8c7bb6c5-fw8gl 1/1 Running 0 3m16s 192.168.241.66 master01 <none> <none>
calico-apiserver calico-apiserver-6b8c7bb6c5-tqnvx 1/1 Running 0 3m16s 192.168.5.2 worker01 <none> <none>
calico-system calico-kube-controllers-649bf9bff6-w2g2w 1/1 Running 0 4m21s 192.168.235.2 master03 <none> <none>
calico-system calico-node-lf2vx 1/1 Running 0 4m21s 172.22.22.46 worker01 <none> <none>
calico-system calico-node-m9g4x 1/1 Running 0 4m21s 172.22.22.47 master01 <none> <none>
calico-system calico-node-mwftr 1/1 Running 0 4m21s 172.22.22.48 master03 <none> <none>
calico-system calico-node-t98v8 1/1 Running 0 4m21s 172.29.84.5 master02 <none> <none>
calico-system calico-typha-5d49cd87b-9wv5l 1/1 Running 0 4m21s 172.22.22.46 worker01 <none> <none>
calico-system calico-typha-5d49cd87b-b499m 1/1 Running 0 4m12s 172.22.22.47 master01 <none> <none>
calico-system csi-node-driver-gqjxf 2/2 Running 0 4m21s 192.168.235.1 master03 <none> <none>
calico-system csi-node-driver-h58vc 2/2 Running 0 4m21s 192.168.5.1 worker01 <none> <none>
calico-system csi-node-driver-lsms6 2/2 Running 0 4m21s 192.168.241.65 master01 <none> <none>
calico-system csi-node-driver-szdhl 2/2 Running 0 4m21s 192.168.59.193 master02 <none> <none>
default test 1/1 Running 0 86s 192.168.5.4 worker01 <none> <none>
kube-system coredns-7c65d6cfc9-9pkpf 1/1 Running 0 6m51s 192.168.235.4 master03 <none> <none>
kube-system coredns-7c65d6cfc9-x5lp6 1/1 Running 0 6m51s 192.168.235.3 master03 <none> <none>
kube-system etcd-master01 1/1 Running 0 6m51s 172.22.22.47 master01 <none> <none>
kube-system etcd-master02 1/1 Running 0 6m6s 172.29.84.5 master02 <none> <none>
kube-system etcd-master03 1/1 Running 0 5m58s 172.22.22.48 master03 <none> <none>
kube-system kube-apiserver-master01 1/1 Running 0 6m51s 172.22.22.47 master01 <none> <none>
kube-system kube-apiserver-master02 1/1 Running 0 6m6s 172.29.84.5 master02 <none> <none>
kube-system kube-apiserver-master03 1/1 Running 0 5m58s 172.22.22.48 master03 <none> <none>
kube-system kube-controller-manager-master01 1/1 Running 0 6m51s 172.22.22.47 master01 <none> <none>
kube-system kube-controller-manager-master02 1/1 Running 0 6m6s 172.29.84.5 master02 <none> <none>
kube-system kube-controller-manager-master03 1/1 Running 0 5m57s 172.22.22.48 master03 <none> <none>
kube-system kube-proxy-26v9w 1/1 Running 0 4m40s 172.22.22.46 worker01 <none> <none>
kube-system kube-proxy-7gwb5 1/1 Running 0 6m8s 172.29.84.5 master02 <none> <none>
kube-system kube-proxy-fxt5s 1/1 Running 0 6m51s 172.22.22.47 master01 <none> <none>
kube-system kube-proxy-glqdb 1/1 Running 0 6m 172.22.22.48 master03 <none> <none>
kube-system kube-scheduler-master01 1/1 Running 0 6m51s 172.22.22.47 master01 <none> <none>
kube-system kube-scheduler-master02 1/1 Running 0 6m6s 172.29.84.5 master02 <none> <none>
kube-system kube-scheduler-master03 1/1 Running 0 5m50s 172.22.22.48 master03 <none> <none>
tigera-operator tigera-operator-89c775547-wzhtr 1/1 Running 0 4m30s 172.22.22.46 worker01 <none> <none>
Network Access:
Pods On different Nodes:
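A hedged sketch of the kind of checks that demonstrate the symptom in the title, using the pod IP 192.168.5.4 and the nginx test pod from the output above (the Service named "test" is an assumption introduced here, not something from the cluster):

# Run from a shell on each control-plane node: direct pod IP reachability
ping -c 3 192.168.5.4
curl -m 5 http://192.168.5.4

# Create a ClusterIP Service for the same pod, then test the cluster IP from each node
kubectl expose pod test --port 80 --name test
CLUSTER_IP=$(kubectl get svc test -o jsonpath='{.spec.clusterIP}')
curl -m 5 "http://${CLUSTER_IP}"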
Replies: 6 comments · 2 replies
-
IP Addr:
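A hedged sketch of the interface and route details usually worth collecting on each node for a Calico VXLAN setup (the vxlan.calico interface name is Calico's default and only exists when VXLAN is in use):

# Run on each node
ip -4 addr show
ip -d link show vxlan.calico                    # shows the VXLAN VNI/port if the interface exists
ip route | grep -E '192\.168\.|vxlan.calico'    # pod-network routes and their next hops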
-
Please let me know what information you need to help debug!
-
Logs:
-
typha
-
drivers
And calico-node
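A hedged sketch of how these logs can be collected, using the pod names from the k get po -A output above (--all-containers avoids needing the exact container names):

kubectl logs -n calico-system calico-node-lf2vx --all-containers    # calico-node on worker01
kubectl logs -n calico-system calico-node-t98v8 --all-containers    # calico-node on master02 (the node in the other subnet)
kubectl logs -n calico-system calico-typha-5d49cd87b-9wv5l
kubectl logs -n calico-system calico-typha-5d49cd87b-b499m
kubectl logs -n calico-system csi-node-driver-h58vc --all-containers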
-
Sounds like an encapsulation problem to me; try setting the IP pool's VXLAN mode to "Always" (see the sketch below).
To verify the change you can use:
kubectl get ippool -o yaml
vxlanMode should be set to "Always".
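A hedged sketch of one way to make that change on an operator-managed install like the one above, assuming the default Installation named "default" and the default pool spec (verify the pool fields against your own Installation before applying):

# Switch the default pool from VXLANCrossSubnet to always-on VXLAN via the Installation CR
kubectl patch installation default --type=merge -p '{
  "spec": {
    "calicoNetwork": {
      "ipPools": [
        {
          "cidr": "192.168.0.0/16",
          "encapsulation": "VXLAN",
          "natOutgoing": "Enabled",
          "nodeSelector": "all()"
        }
      ]
    }
  }
}'

# Confirm the rendered IPPool picked it up
kubectl get ippool -o yaml | grep -E 'vxlanMode|ipipMode'

Going through the Installation CR rather than editing the IPPool directly should keep the operator from reverting the change.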