Communication goes container => port exposed on the Docker host IP => mapped back to => container.
The containers each sit on a different subnet, so traffic between them is routed through the host as above.
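For concreteness, a minimal sketch of such a setup (container names, the image, and the port are placeholders for illustration, not the actual configuration):

```
# Two user-defined bridge networks on different subnets.
docker network create --subnet 172.20.0.0/24 net-server
docker network create --subnet 172.20.2.0/24 net-gateway

# "Server" container with a UDP port published on the host's LAN IP.
docker run -d --name server --network net-server \
  -p 192.168.0.5:5353:5353/udp alpine sleep 1d

# Gateway/client container on the other subnet; it reaches the server
# only via the published port on the host IP, never directly.
docker run -d --name gateway --network net-gateway alpine sleep 1d
```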
What works is TCP, but UDP shows an additional source NAT (MASQUERADE) to the Docker bridge IP on the destination container's subnet: not only on the way down to the container, but, worse, also on the way back from the server container to the client.
Legend:
10.20.9.10 - real client IP (behind the VPN, outside the Docker IP space), which reaches Docker via the VPN gateway
172.20.2.0/24 - subnet the VPN gateway container sits on
192.168.0.5 - real Docker host IP on its LAN interface, where the ports are exposed
172.20.0.4 - container whose ports are exposed on the Docker host IP (call it "server")
172.20.0.1 - Docker bridge IP on the server container's subnet
172.20.2.1 - Docker bridge IP on the VPN gateway container's subnet
The problem is that the reply packet effectively reaches the container that initiated the communication with a wrong source IP (172.20.2.1, the bridge IP on the client container's subnet); the client container in this case is the VPN gateway. As a result, when the packet is forwarded back over the VPN, it is dropped on the client side because it arrives from the wrong IP.
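To spell the failure mode out with the addresses above:

```
Forward:                gateway (172.20.2.x) -> 192.168.0.5:port -> DNAT -> server 172.20.0.4
Expected reply (TCP):   server -> reverse DNAT -> src 192.168.0.5 -> gateway -> VPN -> 10.20.9.10
Observed reply (UDP):   server -> extra MASQUERADE -> src 172.20.2.1 -> gateway -> VPN -> dropped by client
```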
Surprisingly, nothing like this happens for TCP communication.
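One way to see the difference on the wire (a sketch; the bridge name and port are placeholders, the real bridge names can be found via `docker network inspect` or `ip addr`):

```
# Watch the reply's source address on the gateway container's bridge.
tcpdump -ni br-gateway 'udp and port 5353'   # UDP reply arrives with src 172.20.2.1 (wrong)
tcpdump -ni br-gateway 'tcp and port 5353'   # TCP reply arrives with src 192.168.0.5 (correct)
```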
There are no manual iptables NAT rules that could cause a situation like this.
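For reference, the NAT rules can be inspected with the following read-only commands; nothing beyond the standard Docker-generated chains shows up:

```
iptables -t nat -S                    # full nat table in iptables-save syntax
iptables -t nat -L POSTROUTING -n -v  # where MASQUERADE rules live
iptables -t nat -L DOCKER -n -v       # per-port DNAT rules for published ports
```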
Tests were also performed with high ports, to exclude behavior specific to port 53.
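A sketch of such a high-port test (40000 is an arbitrary choice, and `nc` flag spelling varies between BusyBox, GNU, and BSD variants):

```
# In the server container (assumes 40000/udp is also published on the host):
nc -u -l -p 40000
# From the gateway container, via the host IP; type a line on each side and
# check whether the reply arrives from 192.168.0.5 or from 172.20.2.1:
nc -u 192.168.0.5 40000
```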
Client:
 Version:       18.09.7
 API version:   1.39
 Go version:    go1.12.6
 Git commit:    18.09.7
 Built:         Wed Nov 27 08:45:51 UTC 2019
 OS/Arch:       linux/arm
 Experimental:  false

Server:
 Engine:
  Version:      18.09.7
  API version:  1.39 (minimum version 1.12)
  Go version:   go1.12.6
  Git commit:   18.09.7
  Built:        Wed Nov 27 08:45:51 UTC 2019
  OS/Arch:      linux/arm
  Experimental: true
Unfortunately I can't simply bump the Docker version just for tests, as this is a CoreELEC image.
Is this a known issue for which I just couldn't find a reference ticket?
This is not about MASQUERADE for outgoing traffic; it's about MASQUERADE being applied to:
a) traffic from container to container (via the published port),
b) return traffic from container to container (no MASQUERADE should be applied here); typical Docker-generated rules are sketched below.
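For context, the Docker-generated POSTROUTING rules for a setup like this normally look like the sketch below (bridge names and the port are placeholders); in principle none of them should match the reply path in (b), since reply packets of a tracked flow never traverse the nat chains and the hairpin rule only matches traffic from the server container to itself:

```
-A POSTROUTING -s 172.20.0.0/24 ! -o br-server  -j MASQUERADE   # outbound traffic leaving the bridge
-A POSTROUTING -s 172.20.2.0/24 ! -o br-gateway -j MASQUERADE
-A POSTROUTING -s 172.20.0.4/32 -d 172.20.0.4/32 -p udp -m udp --dport 5353 -j MASQUERADE   # hairpin for the published port
```

One hedged guess: if the UDP conntrack entry expires between request and reply, the reply is evaluated as a brand-new flow (src in 172.20.0.0/24, leaving via the gateway bridge), and the first rule above would then MASQUERADE it to 172.20.2.1, which is exactly the wrong source IP observed.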
Re (b): looking at the rules, I can't really tell when or which rule applies it; I couldn't identify one (I even flushed the full nat table and the same thing kept happening, which is weird to say the least). It could be due to some hiccup in how iptables/conntrack handles NAT'ed traffic.
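Worth noting when debugging this: the iptables nat chains are only consulted for the first packet of a flow; the resulting NAT mapping is cached in conntrack and applied to all later packets, which would explain why flushing the nat table changed nothing for an already-running flow. The conntrack state can be inspected and reset directly (the port is from the test sketch above):

```
conntrack -L -p udp --dport 40000   # show cached NAT mappings for the test flow
conntrack -F                        # flush, so the next packet re-runs the nat rules
```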