Bootnodes discovery in cloud #8400
Have you tried using the internal (private) Kubernetes IP and port in your bootnode list in the genesis file?
When I connect nodes internally within a cluster, bootnode discovery works well, since there is no cloud egress gateway in such a scenario. However, in my case I need to connect nodes running in different clusters. To achieve this, I expose their P2P and discovery ports via a load balancer on each side. Here's an example of how I expose the ports:
The problem is that when node 1 of cluster A connects via UDP to node 2 of cluster B, node 2 can't reach the sender of the PING, because it sends the PONG packet to the wrong address (the cloud egress address).
{
"nonce": "0x0",
"timestamp": "0x58ee40ba",
"extraData": "0xf87aa00000000000000000000000000000000000000000000000000000000000000000f85494c20680c38137a1af424850169158fd230a8ef7d2947e13e5d752d9d4ce4b43549b370486955d679ff19457d9a3330593d0cbce09e27eaee7843b0f0af01a94d96786bf2bef3a726ca3be3932fce85c1a23ad65c080c0",
"gasLimit": "0xFFFFFF",
"gasUsed": "0x0",
"number": "0x0",
"difficulty": "0x1",
"coinbase": "0x0000000000000000000000000000000000000000",
"mixHash": "0x63746963616c2062797a616e74696e65206661756c7420746f6c6572616e6365",
"parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"config": {
"chainId": 1337,
"homesteadBlock": 0,
"eip150Block": 0,
"eip150Hash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"eip155Block": 0,
"eip158Block": 0,
"byzantiumBlock": 0,
"constantinopleBlock": 0,
"petersburgBlock": 0,
"istanbulBlock": 0,
"muirGlacierBlock": 0,
"berlinBlock": 0,
"londonBlock": 0,
"arrowGlacierBlock": 0,
"grayGlacierBlock": 0,
"zeroBaseFee": true,
"qbft": {
"blockperiodseconds": 15,
"epochlength": 30000,
"requesttimeoutseconds": 10
}
},
"alloc": {
"0x4b14b0061942d030b6027e82b948b2630f91e26b": {
"balance": "1000000000000000000000000000"
},
"0x138f1f5c05e04a4d7af601d37ecaf58a972c2f78": {
"balance": "1000000000000000000000000000"
},
"0xf29c13b63f58fa74a1116d6c0fcd68d9e112234a": {
"balance": "1000000000000000000000000000"
},
"0xe28a8b67a7a40f9c78fbffa7a3ed3941b6d32b37": {
"balance": "1000000000000000000000000000"
},
"0x79405f3a3bc26961d31b0f5ba7137e8efc90760d": {
"balance": "1000000000000000000000000000"
},
"0xbb027cf2106b3a77767326756b5f1208c2f90994": {
"balance": "1000000000000000000000000000"
},
"0x95e7cc48c2932d184883d949aecbb76a5d656e68": {
"balance": "1000000000000000000000000000"
}
}
}
Organisation A Node 1 config
I just realised there's one more issue related to this. Visual representation of the problem (let's assume we have node 1 running in cluster A and node 2 running in cluster B): node 1 config:
Let's say we have the following cluster routing:
Discovery flow, assuming node 2 sends its PONG to the port taken from the PING packet data:
The solution would be to allow setting
Description
I'm trying to connect multiple Besu nodes from two organisations into one private network. Each organisation runs its nodes in its own cloud environment on Kubernetes. However, it seems impossible to configure bootnode discovery between them.
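For context, nodes in this kind of setup are typically exposed across clusters with a cloud load balancer carrying both the TCP P2P port and the UDP discovery port. The manifest below is a hypothetical sketch only (the reporter's actual manifest is not shown in the issue; the Service name, labels, and port 30303 are assumptions):

```yaml
# Hypothetical example: expose a Besu node's P2P (TCP) and
# discovery (UDP) ports outside the cluster via a cloud load balancer.
apiVersion: v1
kind: Service
metadata:
  name: besu-node-p2p        # assumed name
spec:
  type: LoadBalancer
  selector:
    app: besu-node           # assumed pod label
  ports:
    - name: p2p-tcp
      protocol: TCP
      port: 30303
      targetPort: 30303
    - name: discovery-udp
      protocol: UDP
      port: 30303
      targetPort: 30303
```

Note that some cloud providers require separate Services for TCP and UDP on a load balancer; mixed-protocol LoadBalancer Services are only supported on newer Kubernetes versions.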
After some debugging, I can see that during the bonding process, the receiving node attempts to respond to the port extracted from the UDP datagram (while it correctly retrieves the TCP port from the request data).
The root of the problem is that in cloud environments it is common practice to route egress and ingress traffic through different networks.
Reference: https://github.com/hyperledger/besu/blob/main/ethereum/p2p/src/main/java/org/hyperledger/besu/ethereum/p2p/discovery/PeerDiscoveryAgent.java#L288
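The mismatch can be illustrated with a small standalone sketch. This is not Besu's actual code; the hosts and ports are made-up addresses from the RFC 5737 documentation ranges. It models the choice the receiver makes when deciding where to send a PONG: the datagram's observed source endpoint (the sender's egress gateway, behind cloud NAT) versus the endpoint the sender advertised inside the PING packet (its ingress load balancer, where it is actually reachable).

```java
// Hypothetical sketch, NOT Besu's implementation: models the PONG
// destination choice during discovery bonding.
public class PongEndpointSketch {

    record Endpoint(String host, int udpPort) {}

    // What the receiver sees on the wire: the datagram's source address.
    // Behind a cloud NAT this is the sender's EGRESS gateway (assumed values).
    static final Endpoint DATAGRAM_SOURCE = new Endpoint("203.0.113.9", 54321);

    // What the sender advertised in the PING packet's endpoint data:
    // its INGRESS load balancer, where it is actually reachable (assumed values).
    static final Endpoint ADVERTISED_IN_PING = new Endpoint("198.51.100.7", 30303);

    // Replying to the datagram source sends the PONG to the egress gateway,
    // which never delivers it back to the sender.
    static Endpoint choosePongDestination(boolean replyToDatagramSource) {
        return replyToDatagramSource ? DATAGRAM_SOURCE : ADVERTISED_IN_PING;
    }

    public static void main(String[] args) {
        Endpoint current = choosePongDestination(true);
        Endpoint fixed = choosePongDestination(false);
        System.out.println("reply to datagram source -> " + current.host() + ":" + current.udpPort());
        System.out.println("reply to advertised addr -> " + fixed.host() + ":" + fixed.udpPort());
    }
}
```

Running the sketch prints the egress-gateway endpoint for the first case and the reachable load-balancer endpoint for the second, which is the difference the issue describes.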
Acceptance Criteria