Commit 67b7415

Rollup of PR #3642, 3df4ce8..70f7e33
history
+ Docu for setting up network tunnel for testing
+ Adjust to changed main API
+ Remove `ReplayMsgRef` types and use `ReplayPayloadRef` instead
  why
    The layout of the `ReplayMsgRef` sub-structures mirrors that of the
    `ReplayPayloadRef` sub-structures. So the former sub-structures have
    been replaced by the latter ones.
+ Refactor `ReplayRef` to be inheritable by `ReplayRunnerRef`
+ Replace bit-mask controlled optional capture fields by `Opt[]` structures
  why
    Contrary to RLP, which was not fully reliable at the time the capture
    record layout was defined, JSON is fully capable of handling `Opt[]`
    constructs.
+ Simplify/prettify code
  details
    No need to prefix capture type names with `Trt`, so removed.
    Simplify reader switches by using template+mixin.
    Use a static table for pretty-printing capture record labels.
+ Refactor capture records for JSON format read/write
+ Use gunzip from the updated `nim-zlib` package
+ Use `confutils` for command line options management
  why
    No need for extra command line parsing stuff.
+ Fix copyright years
+ Provide command line capture replay tool
  details
    This tool wraps and runs the `nimbus_execution_client` with the sync
    scheduler partially superseded by a captured state data replay facility.
+ Update replay framework for full capture replay
+ Provide command line capture inspection tool
+ Provide capture inspection framework, part of replay
  details
    Currently only for dumping capture data as logs.
+ Provide command line tracer tool
  details
    This tool wraps and runs the `nimbus_execution_client` while tracing
    and capturing state data.
+ Provide tracer framework with intercepting syncer session handlers
1 parent 0cf35bd commit 67b7415

38 files changed: +5092 −1 lines changed

Makefile

Lines changed: 12 additions & 1 deletion
@@ -393,8 +393,19 @@ evmstate_test: | build deps evmstate
 txparse: | build deps
 	$(ENV_SCRIPT) nim c $(NIM_PARAMS) "tools/txparse/$@.nim"
 
+# build syncer debugging and analysis tools
+SYNCER_TOOLS_DIR := tools/syncer
+SYNCER_TOOLS := $(foreach name,trace inspect replay,syncer_test_client_$(name))
+.PHONY: syncer-tools syncer-tools-clean $(SYNCER_TOOLS)
+syncer-tools: $(SYNCER_TOOLS)
+syncer-tools-clean:
+	rm -f $(foreach exe,$(SYNCER_TOOLS),build/$(exe))
+$(SYNCER_TOOLS): | build deps rocksdb
+	echo -e $(BUILD_MSG) "build/$@"
+	$(ENV_SCRIPT) nim c $(NIM_PARAMS) -o:build/$@ "$(SYNCER_TOOLS_DIR)/$@.nim"
+
 # usual cleaning
-clean: | clean-common
+clean: | clean-common syncer-tools-clean
 	rm -rf build/{nimbus,nimbus_execution_client,nimbus_portal_client,fluffy,portal_bridge,libverifproxy,nimbus_verified_proxy,$(TOOLS_CSV),$(PORTAL_TOOLS_CSV),all_tests,test_kvstore_rocksdb,test_rpc,all_portal_tests,all_history_network_custom_chain_tests,test_portal_testnet,utp_test_app,utp_test,*.dSYM}
 	rm -rf tools/t8n/{t8n,t8n_test}
 	rm -rf tools/evmstate/{evmstate,evmstate_test}
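
Given these targets, the three syncer tools can be built and cleaned up as
follows (a usage sketch based on the Makefile rules above):

    make syncer-tools                # builds all three syncer tools
    ls build/syncer_test_client_*    # trace, inspect and replay binaries
    make syncer-tools-clean          # removes the binaries again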

tools/syncer/.gitignore

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+*.html

tools/syncer/README.md

Lines changed: 312 additions & 0 deletions
Setting up a routing tunnel from a hidden RFC 1918 network
==========================================================

Routing bidirectional traffic selectively through a dedicated tunnel to a
public server (e.g. an inexpensive rented *cloud* system) allows a test
system hidden behind a digital subscriber line or dial-up network to emulate
many properties of a publicly exposed system.

The systems involved typically share other services besides the ones
installed for tunnelling. So some care must be taken on the local system
when allowing incoming data connections. The risk can be mitigated by using
network filters (e.g. `nftables` on *Linux*) that accept incoming
connections only on sub-systems, e.g. `qemu` or `docker` virtual hosts.
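
As an illustration of that idea, a minimal `nftables` sketch for *LOCAL*
might look as follows (the 192.168.122.0/24 sub-network is taken from the
example configuration further below; this is not part of the tunnel set-up
itself):

    table ip filter {
        chain forward {
            type filter hook forward priority filter; policy drop;
            ct state related,established accept
            # new connections may only be opened from or towards the
            # virtual sub-systems, never to the LOCAL host itself
            ip daddr 192.168.122.0/24 ct state new accept
            ip saddr 192.168.122.0/24 ct state new accept
        }
    }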

General layout
--------------

The following scenario was set up for testing the syncer:

         EXTERN                 LOCAL             SUB1
        +-------+           +-----------+       +------+
        |       |  tunnel   |           |       |      |
        |   o---l-----------r---o   o---s---+---a---o  |
        |   o   |           |   |       |   |   |      |
        +---|---+           +---|-------+   |   +------+
            x                   y           |
            |                   |           |     SUB2
         ---//--+-----//--------//--+       |   +------+
         internet  internet MASQUERADE      |   |      |
                                            +---b---o  |
                                            |   |      |
                                            |   +------+
                                            :    ...

where *EXTERN* is a system with a public IP address (on interface **x**),
fully exposed to the internet (e.g. a rented *cloud* server). *LOCAL* is a
system on a local network (typically with RFC 1918 or RFC 5737 addresses)
that has access to the internet via *SNAT* or *MASQUERADE* address
translation on interface **y**.

The system *LOCAL* accesses services on system *EXTERN* via the internet
connection. An *EXTERN* -- *LOCAL* logical connection facilitated by
interfaces **x** and **y** allows for setting up a virtual peer-to-peer
tunnel for general IP traffic (both UDP and TCP are needed) between the two
systems. This tunnel is depicted above as the dedicated *EXTERN* -- *LOCAL*
connection with interfaces **l** and **r**.

The system *LOCAL* provides routing services to the internet for the systems
*SUB1*, *SUB2*, etc. via interface **s** on *LOCAL*. Technically, these
sub-systems might run on a virtual system within the *LOCAL* system.


Example interface and network addresses
---------------------------------------

The addresses used in the configuration scripts below are listed here.

| interface | IP address       | netmask | gateway       | additional info
|-----------|-----------------:|:--------|:--------------|-----------------
| **a**     | 192.168.122.22   | /24     | 192.168.122.1 |
| **b**     | 192.168.122.23   | /24     | 192.168.122.1 |
| **l**     | 10.3.4.1         | /32     | n/a           | point-to-point
| **r**     | 10.3.4.2         | /32     | n/a           | point-to-point
| **s**     | 192.168.122.1    | /24     |               |
| **x**     | <server-address> |         |               | public address
| **y**     |                  |         | 172.17.33.1   | dynamic, DHCP
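
The locally configured counterparts of these values can be cross-checked
with the usual `iproute2` commands, e.g.

    ip -brief addr show    # interface addresses and netmasks
    ip route show          # default route and gateways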


Why not use *ssh* or other TCP tunnel software
----------------------------------------------

With *ssh*, one can logically pre-allocate a list of TCP connections between
two systems. This sets up listeners on one end of the tunnel which come out
at the other end when an application connects to a listener. It is easily
set up and provides reliable, encrypted connections. But this does not
provide the kind of tunnel wanted here.

In another *ssh* mode, one can build a connection and tie it to a *pty* or
a *tun* device. In the case of a *pty*, one can install a *ppp* connection
on top of that. In either case, one ends up with a pair of network interfaces
that could be used for implementing the **r**--**l** tunnel of the above
scenario.

Unfortunately, that setup works well only in some rare cases (probably on
a *LAN*) for TCP over *ssh*, the reason being that TCP traffic control
adjusts simultaneously on two levels: the outer *ssh* TCP connection and the
inner TCP data connection (see details on
[PPP over *ssh*](https://web.archive.org/web/20220103191127/http://sites.inka.de/bigred/devel/tcp-tcp.html)
or [TCP over TCP](https://lsnl.jp/~ohsaki/papers/Honda05_ITCom.pdf)).
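
For reference only, the *tun* based *ssh* variant just described would be
set up roughly as below (it requires root on both ends and `PermitTunnel
yes` in the server's `sshd_config`); it is not used here for the reasons
given above.

    # on LOCAL: allocate tun device 0 on both ends of the connection
    ssh -o Tunnel=point-to-point -w 0:0 root@<server-address>

    # assign the point-to-point addresses manually
    ip addr add 10.3.4.2 peer 10.3.4.1 dev tun0    # on LOCAL
    ip addr add 10.3.4.1 peer 10.3.4.2 dev tun0    # on EXTERN
    ip link set tun0 up                            # on both systems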


Suitable **r**--**l** tunnel software solutions
-----------------------------------------------

The software package used here is `quicktun`, which runs a single UDP-based
peer-to-peer tunnel and provides several flavours of encryption.

Other solutions would be `openVPN`, which provides multiple topologies with
pluggable authentication and encryption, or `vtun`, which provides a
server-centric star topology (with optional encryption that is considered
weak).


Setting up the **r**--**l** tunnel on Debian bookworm
-----------------------------------------------------

A detailed description of the `quicktun` software is available at
[QuickTun](http://wiki.ucis.nl/QuickTun).

All commands shown here must be run with administrator privileges, i.e. as
user **root**.

Install the tunnel software on *LOCAL* and *EXTERN* via

    apt install quicktun

Generate and remember two key pairs by running `keypair` twice. This gives
the keys

    SECRET: <local-secret>
    PUBLIC: <local-public>

    SECRET: <extern-secret>
    PUBLIC: <extern-public>

On *LOCAL*, set it up as a client. Install the file

    /etc/network/interfaces.d/client-tun

with the contents

    # Do not use the automatic directive "auto tun0" here. This would
    # bring up this tunnel interface too early. Rather use `ifup tun0` in
    # "/etc/rc.local". On Debian, unless already done so, this start-up
    # file can be enabled via
    #   chmod +x /etc/rc.local
    #   systemctl enable rc-local.service
    #
    iface tun0 inet static
        # See http://wiki.ucis.nl/QuickTun for details. Contrary to the
        # examples there, comments must not follow the directives on the
        # same line to the right.
        address 10.3.4.2
        pointopoint 10.3.4.1
        netmask 255.255.255.255
        qt_local_address 0.0.0.0

        # Explicit port number (default 2998)
        qt_remote_port 2992
        qt_remote_address <server-address>

        # Available protocols: raw, nacl0, nacltai, salty
        qt_protocol nacl0
        qt_tun_mode 1

        # This is the private tunnel key which should be accessible by
        # root only. Public access to this config file should be restricted
        # to root only, e.g. via
        #   chmod go-rw <this-file>
        qt_private_key <local-secret>

        # Server public key
        qt_public_key <extern-public>

        # Make certain that tunnel packets can be sent via the outbound
        # interface.
        up   route add -host <server-address> gw 172.17.33.1 || true
        down route del -host <server-address> gw 172.17.33.1 || true

        # Route virtual network data into the tunnel. To achieve this, two
        # routing tables are used: "main" and a local one, "8". The "main"
        # table is the standard table; the local one "8" is used to route
        # a set of traffic into the tunnel interface. Routing tables "main"
        # or "8" are selected by the policy set up via
        #   ip rule add ... lookup <table>
        up   ip rule add from 192.168.122.1 lookup main || true
        up   ip rule add from 192.168.122.0/24 lookup 8 || true
        up   ip rule add from 10.3.4.2 lookup 8 || true
        up   ip route add default via 10.3.4.1 table 8 || true
        up   ip route add 192.168.122.0/24 via 192.168.122.1 table 8 || true

        down ip rule del from 192.168.122.1 lookup main || true
        down ip rule del from 192.168.122.0/24 || true
        down ip rule del from 10.3.4.2 lookup 8 || true
        down ip route flush table 8 || true

    # End

On *EXTERN*, set it up as a server. Install the file

    /etc/network/interfaces.d/server-tun

with the contents

    iface tun0 inet static
        address 10.3.4.1
        pointopoint 10.3.4.2
        netmask 255.255.255.255
        qt_remote_address 0.0.0.0

        qt_local_port 2992
        qt_local_address <server-address>

        qt_protocol nacl0
        qt_tun_mode 1

        # Do not forget to run `chmod go-rw <this-file>`
        qt_private_key <extern-secret>
        qt_public_key <local-public>

        # Route into the hidden sub-network which will be exposed after NAT.
        up   route add -net 192.168.122.0 netmask 255.255.255.0 gw 10.3.4.1
        down route del -net 192.168.122.0 netmask 255.255.255.0 gw 10.3.4.1

On both systems, *EXTERN* and *LOCAL*, make certain that the file

    /etc/network/interfaces

contains the line

    source /etc/network/interfaces.d/*

Then the tunnel can be established by running

    ifup tun0

on either system. In order to verify it, try running

    ping 10.3.4.2    # on EXTERN
    ping 10.3.4.1    # on LOCAL
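
Should the ping test fail, the following commands help with locating the
problem (routing table "8" exists on *LOCAL* only):

    ip addr show tun0        # tunnel interface is up, addresses 10.3.4.x
    ip rule list             # policy rules selecting routing table 8
    ip route show table 8    # default route pointing into the tunnel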


Configuring `nftables` on the *EXTERN* server
---------------------------------------------

A suggested `nftables` filter and NAT rule set on the *Linux* host *EXTERN*
would be

    #! /usr/sbin/nft -f

    define wan_if = <server-interface>
    define wan_ip = <server-address>
    define tun_if = tun0

    define gw_ip = 10.3.4.2
    define gw_ports = { 30600-30699, 9010-9019 }
    define h1_ip = 192.168.122.22
    define h1_ports = 30700-30799
    define h2_ip = 192.168.122.23
    define h2_ports = 9000-9009

    table ip filter {
        # Accept all input and output
        chain INPUT { type filter hook input priority filter; policy accept; }
        chain OUTPUT { type filter hook output priority filter; policy accept; }

        # Selective tunnel transit and NAT debris
        chain FORWARD {
            type filter hook forward priority filter; policy drop;
            ct state related,established counter accept
            iif $tun_if ct state new counter accept
            iif $tun_if counter accept
            iif $wan_if ct state new counter accept
            iif $wan_if counter accept
            counter log prefix "Tunnel Drop " level info
            counter drop
        }
    }
    table ip nat {
        chain INPUT { type nat hook input priority 100; policy accept; }
        chain OUTPUT { type nat hook output priority -100; policy accept; }

        # Map new connection destination address depending on dest. port
        chain PREROUTING {
            type nat hook prerouting priority dstnat; policy accept;
            ip daddr $wan_ip tcp dport $h1_ports counter dnat to $h1_ip
            ip daddr $wan_ip udp dport $h1_ports counter dnat to $h1_ip
            ip daddr $wan_ip tcp dport $h2_ports counter dnat to $h2_ip
            ip daddr $wan_ip udp dport $h2_ports counter dnat to $h2_ip
            ip daddr $wan_ip tcp dport $gw_ports counter dnat to $gw_ip
            ip daddr $wan_ip udp dport $gw_ports counter dnat to $gw_ip
        }
        # Map new connection source address to wan address
        chain POSTROUTING {
            type nat hook postrouting priority srcnat; policy accept;
            oif $wan_if ip daddr $wan_ip counter return
            oif $wan_if ip saddr $gw_ip counter snat to $wan_ip
            oif $wan_if ip saddr $h1_ip counter snat to $wan_ip
            oif $wan_if ip saddr $h2_ip counter snat to $wan_ip
        }
    }
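
Packet forwarding must be enabled for the FORWARD chain to see any traffic,
and the rule set can be syntax-checked before loading. Assuming the rules
were saved as `/etc/nftables.conf` (the Debian default location):

    sysctl -w net.ipv4.ip_forward=1    # enable IPv4 forwarding
    nft -c -f /etc/nftables.conf       # dry run, syntax check only
    nft -f /etc/nftables.conf          # load the rule set
    nft list ruleset                   # show what is active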


Running Nimbus EL or CL on the *LOCAL* client and/or *SUB1*, *SUB2*
-------------------------------------------------------------------

When starting `nimbus_execution_client` on the *SUB1*, *SUB2*, etc. systems,
one needs to set the options

    --engine-api-address=0.0.0.0
    --nat=extip:<server-address>

and for the `nimbus_beacon_node` on *SUB1*, *SUB2*, etc. use

    --nat=extip:<server-address>

For running both `nimbus_execution_client` and `nimbus_beacon_node`
on *LOCAL* directly, one needs to set the options

    --listen-address=10.3.4.2
    --nat=extip:<server-address>

for either client.
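
For illustration, a hypothetical start of the execution client on *SUB1*
could then look as follows (assuming the binary was built into `build/`):

    ./build/nimbus_execution_client \
        --engine-api-address=0.0.0.0 \
        --nat=extip:<server-address>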
