“err: routing: not found” #559

Open · 0702ccc opened this issue Apr 14, 2024 · 1 comment
Labels: kind/bug Categorizes issue or PR as related to a bug.

0702ccc commented Apr 14, 2024

What happened:
I'm trying to run EdgeMesh on a cluster with one master node, two worker nodes, and one edge node.

I followed the installation guide (https://edgemesh.netlify.app/zh/guide), enabled KubeEdge's edge Kube-API endpoint service, and then manually downloaded the manifests and applied them successfully.
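
My understanding is that enabling the edge Kube-API endpoint amounts to roughly the following (a sketch paraphrased from the KubeEdge docs; exact key names may differ by version):

```yaml
# cloudcore.yaml (cloud side): let cloudcore serve list/watch for the edge
modules:
  dynamicController:
    enable: true
---
# edgecore.yaml (edge side): enable metaServer, the local Kube-API endpoint
modules:
  metaManager:
    metaServer:
      enable: true
```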

But when I ran the test, I still couldn't access the services on the edge node. Checking the edgemesh-agent logs on the cloud node, I found the following:

```
I0414 15:27:28.257479       1 server.go:92] [1] Prepare agent to run
I0414 15:27:28.257609       1 netif.go:96] bridge device edgemesh0 already exists
I0414 15:27:28.257684       1 server.go:96] edgemesh-agent running on CloudMode
I0414 15:27:28.257700       1 server.go:99] [2] New clients
W0414 15:27:28.257707       1 client_config.go:617] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0414 15:27:28.258276       1 server.go:106] [3] Register beehive modules
W0414 15:27:28.258296       1 module.go:37] Module EdgeDNS is disabled, do not register
I0414 15:27:28.258516       1 server.go:68] Using userspace Proxier.
I0414 15:27:28.658873       1 module.go:34] Module EdgeProxy registered successfully
I0414 15:27:28.660253       1 util.go:273] Listening to docker0 is meaningless, skip it.
I0414 15:27:28.660263       1 util.go:273] Listening to tunl0 is meaningless, skip it.
I0414 15:27:28.660267       1 util.go:273] Listening to edgemesh0 is meaningless, skip it.
I0414 15:27:28.660407       1 module.go:126] Configure libp2p.ForceReachabilityPrivate()
I0414 15:27:28.758416       1 module.go:181] I'm {12D3KooWPxmEcti3EaVEParig2rmggSuRiXzrgfnejgUmrZPkE8N: [/ip4/192.168.3.222/tcp/20006 /ip4/127.0.0.1/tcp/20006]}
I0414 15:27:28.758461       1 module.go:203] Bootstrapping the DHT
I0414 15:27:28.759397       1 tunnel.go:80] Starting MDNS discovery service
I0414 15:27:28.759437       1 tunnel.go:93] Starting DHT discovery service
I0414 15:27:28.759528       1 module.go:34] Module EdgeTunnel registered successfully
I0414 15:27:28.759535       1 server.go:112] [4] Cache beehive modules
I0414 15:27:28.759544       1 server.go:119] [5] Start all modules
I0414 15:27:28.759870       1 core.go:24] Starting module EdgeProxy
I0414 15:27:28.760649       1 core.go:24] Starting module EdgeTunnel
I0414 15:27:28.857932       1 config.go:135] "Starting endpoints config controller"
I0414 15:27:28.857957       1 shared_informer.go:240] Waiting for caches to sync for endpoints config
I0414 15:27:28.858244       1 config.go:317] "Starting service config controller"
I0414 15:27:28.858345       1 shared_informer.go:240] Waiting for caches to sync for service config
I0414 15:27:28.858422       1 tunnel.go:493] Starting relay finder
I0414 15:27:28.858501       1 loadbalancer.go:239] "Starting loadBalancer destinationRule controller"
I0414 15:27:28.858555       1 shared_informer.go:240] Waiting for caches to sync for loadBalancer destinationRule
I0414 15:27:28.958044       1 shared_informer.go:247] Caches are synced for endpoints config
I0414 15:27:28.959109       1 shared_informer.go:247] Caches are synced for loadBalancer destinationRule
I0414 15:27:28.959208       1 shared_informer.go:247] Caches are synced for service config
I0414 15:27:29.154758       1 tunnel.go:126] [MDNS] Discovery found peer: {12D3KooWJ4ZPvJeRsVsNEaN8WLLE6sJjE6dQvkibGSqbnPtNSEWz: [/ip4/127.0.0.1/tcp/20006 /ip4/192.168.3.169/tcp/20006]}
I0414 15:27:29.356327       1 tunnel.go:156] [MDNS] New stream between peer {12D3KooWJ4ZPvJeRsVsNEaN8WLLE6sJjE6dQvkibGSqbnPtNSEWz: [/ip4/127.0.0.1/tcp/20006 /ip4/192.168.3.169/tcp/20006]} success
I0414 15:27:29.455442       1 tunnel.go:205] Discovery service got a new stream from {12D3KooWJ4ZPvJeRsVsNEaN8WLLE6sJjE6dQvkibGSqbnPtNSEWz: [/ip4/192.168.3.169/tcp/20006]}
I0414 15:27:29.455507       1 tunnel.go:234] [MDNS] Discovery from lijt-node1 : {12D3KooWJ4ZPvJeRsVsNEaN8WLLE6sJjE6dQvkibGSqbnPtNSEWz: [/ip4/192.168.3.169/tcp/20006]}
I0414 15:27:29.455537       1 tunnel.go:192] [MDNS] Discovery to lijt-node1 : {12D3KooWJ4ZPvJeRsVsNEaN8WLLE6sJjE6dQvkibGSqbnPtNSEWz: [/ip4/127.0.0.1/tcp/20006 /ip4/192.168.3.169/tcp/20006]}
I0414 15:30:07.600832       1 tunnel.go:264] Could not find peer edge-node in cache, auto generate peer info: {12D3KooWShWPN2xpmkQANTTbdfjGFcUKn92EtvemDPgr6xFDD7gz: []}
E0414 15:30:07.605312       1 loadbalancer.go:686] "Dial failed" err="get proxy stream from edge-node error: new stream between edge-node: {12D3KooWShWPN2xpmkQANTTbdfjGFcUKn92EtvemDPgr6xFDD7gz: []} err: routing: not found"
E0414 15:30:07.856744       1 loadbalancer.go:686] "Dial failed" err="get proxy stream from edge-node error: new stream between edge-node: {12D3KooWShWPN2xpmkQANTTbdfjGFcUKn92EtvemDPgr6xFDD7gz: []} err: routing: not found"
E0414 15:30:08.358487       1 loadbalancer.go:686] "Dial failed" err="get proxy stream from edge-node error: new stream between edge-node: {12D3KooWShWPN2xpmkQANTTbdfjGFcUKn92EtvemDPgr6xFDD7gz: []} err: routing: not found"
E0414 15:30:09.359715       1 loadbalancer.go:686] "Dial failed" err="get proxy stream from edge-node error: new stream between edge-node: {12D3KooWShWPN2xpmkQANTTbdfjGFcUKn92EtvemDPgr6xFDD7gz: []} err: routing: not found"
E0414 15:30:11.360084       1 proxysocket.go:98] "Failed to connect to balancer" err="failed to connect to an endpoint"
```
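
The `auto generate peer info: {...: []}` line suggests the cloud-side agent has no addresses at all for `edge-node`. A first sanity check (pod name below is a placeholder; EdgeMesh runs in the `kubeedge` namespace in my setup) is whether edgemesh-agent is actually running on the edge node and what peer ID it announces at startup:

```bash
# Is edgemesh-agent scheduled and Running on every node, including edge-node?
kubectl get pods -n kubeedge -o wide | grep edgemesh-agent

# Each agent logs its peer ID at startup ("I'm {12D3KooW...}").
# If kubectl logs cannot reach the edge node, read the container logs on the node itself.
kubectl logs -n kubeedge <edgemesh-agent-pod-on-edge-node> | grep "I'm"
```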

Do I need to set up a relay node?
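
If so, I assume it would look something like the `relayNodes` example from the guide (node name and address below are placeholders for my environment):

```yaml
# Sketch of the relayNodes section in the edgemesh-agent ConfigMap
# (placeholders; the relay must be reachable from both cloud and edge side)
relayNodes:
- nodeName: k8s-master
  advertiseAddress:
  - 192.168.3.222
```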

Due to the special characteristics of the master node, I attempted to deploy EdgeMesh on it but failed. Also, I filter out the kube-dns service so that edgemesh does not proxy it.

**What you expected to happen**:

```
$ kubectl exec -it alpine-test -- sh
(inside the container)
/ # curl hostname-svc:12345
hostname-edge-5c75d56dc4-rq57t
```
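
For completeness, the service and its backing endpoint can be double-checked like this (plain kubectl; `hostname-svc` is the service from the test above):

```bash
# Confirm the service exists and that the edge pod shows up as an endpoint
kubectl get svc hostname-svc
kubectl get endpoints hostname-svc -o wide
```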


0702ccc added the kind/bug label on Apr 14, 2024

0702ccc (Author) commented Apr 14, 2024

I also checked the iptables rules on the worker nodes and the edge node; EdgeMesh's chains sit in front of kube-proxy's:
```
Chain PREROUTING (policy ACCEPT 5 packets, 376 bytes)
 pkts bytes target     prot opt in  out  source       destination
   35  3611 KUBE-PORTALS-CONTAINER  all  --  *  *  0.0.0.0/0  0.0.0.0/0  /* handle ClusterIPs; NOTE: this must be before the NodePort rules */
    2   528 KUBE-NODEPORT-CONTAINER  all  --  *  *  0.0.0.0/0  0.0.0.0/0  ADDRTYPE match dst-type LOCAL /* handle service NodePorts; NOTE: this must be the last rule in the chain */

Chain INPUT (policy ACCEPT 5 packets, 376 bytes)
 pkts bytes target     prot opt in  out  source       destination

Chain OUTPUT (policy ACCEPT 42 packets, 2796 bytes)
 pkts bytes target     prot opt in  out  source       destination
  433 29522 KUBE-PORTALS-HOST  all  --  *  *  0.0.0.0/0  0.0.0.0/0  /* handle ClusterIPs; NOTE: this must be before the NodePort rules */
  360 23744 KUBE-NODEPORT-HOST  all  --  *  *  0.0.0.0/0  0.0.0.0/0  ADDRTYPE match dst-type LOCAL /* handle service NodePorts; NOTE: this must be the last rule in the chain */

Chain POSTROUTING (policy ACCEPT 42 packets, 2796 bytes)
 pkts bytes target     prot opt in  out  source       destination
85143 5655K KUBE-POSTROUTING  all  --  *  *  0.0.0.0/0   0.0.0.0/0  /* kubernetes postrouting rules */
    0     0 CNI-a2e0f4856e5dfb4dce930b3e  all  --  *  *  10.88.0.14  0.0.0.0/0  /* name: "containerd-net" id: "9e50e8e98960f618c86b2ec15f97b7f39648a2cb96500332a08a2df7de1c2e89" */
    0     0 CNI-004429f12cb01b286a8f3d41  all  --  *  *  10.88.0.16  0.0.0.0/0  /* name: "containerd-net" id: "8fe0a30e24804321a752a0259811fd6428669018f5e9fbde8d42190507f6725f" */

Chain CNI-004429f12cb01b286a8f3d41 (1 references)
 pkts bytes target     prot opt in  out  source       destination
    0     0 ACCEPT      all  --  *  *  0.0.0.0/0   10.88.0.0/16  /* name: "containerd-net" id: "8fe0a30e24804321a752a0259811fd6428669018f5e9fbde8d42190507f6725f" */
    0     0 MASQUERADE  all  --  *  *  0.0.0.0/0  !224.0.0.0/4   /* name: "containerd-net" id: "8fe0a30e24804321a752a0259811fd6428669018f5e9fbde8d42190507f6725f" */

Chain CNI-a2e0f4856e5dfb4dce930b3e (1 references)
 pkts bytes target     prot opt in  out  source       destination
    0     0 ACCEPT      all  --  *  *  0.0.0.0/0   10.88.0.0/16  /* name: "containerd-net" id: "9e50e8e98960f618c86b2ec15f97b7f39648a2cb96500332a08a2df7de1c2e89" */
    0     0 MASQUERADE  all  --  *  *  0.0.0.0/0  !224.0.0.0/4   /* name: "containerd-net" id: "9e50e8e98960f618c86b2ec15f97b7f39648a2cb96500332a08a2df7de1c2e89" */

Chain KUBE-KUBELET-CANARY (0 references)
 pkts bytes target     prot opt in  out  source       destination

Chain KUBE-MARK-DROP (0 references)
 pkts bytes target     prot opt in  out  source       destination
    0     0 MARK        all  --  *  *  0.0.0.0/0   0.0.0.0/0  MARK or 0x8000

Chain KUBE-MARK-MASQ (0 references)
 pkts bytes target     prot opt in  out  source       destination
    0     0 MARK        all  --  *  *  0.0.0.0/0   0.0.0.0/0  MARK or 0x4000

Chain KUBE-NODEPORT-CONTAINER (1 references)
 pkts bytes target     prot opt in  out  source       destination

Chain KUBE-NODEPORT-HOST (1 references)
 pkts bytes target     prot opt in  out  source       destination

Chain KUBE-PORTALS-CONTAINER (1 references)
 pkts bytes target     prot opt in  out  source       destination
    0     0 DNAT        tcp  --  *  *  0.0.0.0/0   10.98.250.4    /* default/hostname-svc:http-0 */ tcp dpt:12345 to:169.254.96.16:35907
    0     0 DNAT        tcp  --  *  *  0.0.0.0/0   10.99.46.39    /* default/nginx-service */ tcp dpt:80 to:169.254.96.16:43323
    0     0 DNAT        tcp  --  *  *  0.0.0.0/0   10.102.217.88  /* kube-system/metrics-server:https */ tcp dpt:443 to:169.254.96.16:46175

Chain KUBE-PORTALS-HOST (1 references)
 pkts bytes target     prot opt in  out  source       destination
    0     0 DNAT        tcp  --  *  *  0.0.0.0/0   10.98.250.4    /* default/hostname-svc:http-0 */ tcp dpt:12345 to:169.254.96.16:35907
    0     0 DNAT        tcp  --  *  *  0.0.0.0/0   10.99.46.39    /* default/nginx-service */ tcp dpt:80 to:169.254.96.16:43323
    0     0 DNAT        tcp  --  *  *  0.0.0.0/0   10.102.217.88  /* kube-system/metrics-server:https */ tcp dpt:443 to:169.254.96.16:46175

Chain KUBE-POSTROUTING (1 references)
 pkts bytes target     prot opt in  out  source       destination
85140 5655K RETURN      all  --  *  *  0.0.0.0/0   0.0.0.0/0  mark match ! 0x4000/0x4000
    0     0 MARK        all  --  *  *  0.0.0.0/0   0.0.0.0/0  MARK xor 0x4000
    0     0 MASQUERADE  all  --  *  *  0.0.0.0/0   0.0.0.0/0  /* kubernetes service traffic requiring SNAT */ random-fully
```
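
For reference, the listing above is the nat table with per-rule packet counters, dumped with:

```bash
# Dump the nat table, numeric addresses, with per-rule packet counters
iptables -t nat -L -n -v
```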
