Parallelize init further #747
Conversation
I've assigned @tnull as a reviewer!
🔔 1st Reminder Hey @tnull! This PR has been waiting for your review.
tnull left a comment
This needs a rebase now. Mostly looks good otherwise, I think.
Force-pushed from 71fdb1d to f173114
Rebased and added a trivial prefactor commit that moved the spawner to `runtime.rs`.
In a few commits, as we upgrade LDK, we'll use `RuntimeSpawner` outside of gossip, at which point it will make much more sense to have it in `runtime.rs`.
Upstream LDK added the ability to read `ChannelMonitor`s from storage in parallel, which we switch to here.
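As an aside, here is a minimal sketch of what parallel monitor loading looks like in principle. The `KvStore`, `list_monitor_keys`, and `read_monitor` names are hypothetical stand-ins, not the actual rust-lightning#4147 API: list the stored monitor keys once, then issue all per-monitor reads concurrently and await them together.

```rust
use futures::future::try_join_all;

// Hypothetical stand-ins for the real store and monitor types; this only
// illustrates the concurrency pattern, not the actual LDK API.
struct KvStore;
struct ChannelMonitor(Vec<u8>);
#[derive(Debug)]
struct ReadError;

impl KvStore {
	async fn list_monitor_keys(&self) -> Result<Vec<String>, ReadError> {
		Ok(vec!["monitor_a".to_string(), "monitor_b".to_string()])
	}
	async fn read_monitor(&self, _key: &str) -> Result<ChannelMonitor, ReadError> {
		Ok(ChannelMonitor(Vec::new()))
	}
}

async fn read_monitors_in_parallel(store: &KvStore) -> Result<Vec<ChannelMonitor>, ReadError> {
	let keys = store.list_monitor_keys().await?;
	// Issue all monitor reads concurrently: total latency is roughly one extra
	// round-trip to the backing store rather than one per monitor.
	try_join_all(keys.iter().map(|key| store.read_monitor(key))).await
}
```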
Since I was editing the init logic anyway, I couldn't resist going ahead and parallelizing various read calls. Since we added support for an async `KVStore` in LDK 0.2/ldk-node 0.7, we can now practically do initialization reads in parallel. Thus, rather than making a long series of read calls in `build`, we use `tokio::join` to reduce the number of round-trips to our backing store, which should be a very large win for initialization cost for those using remote storage (e.g. VSS).

Sadly we can't trivially do all our reads in one go: we need the payment history to initialize the BDK wallet, which is used in the `Wallet` object, which is referenced in our `KeysManager`. Thus we first read the payment store and node metrics before moving on. Then, we need a reference to the `NetworkGraph` when we build the scorer. While we could (and eventually should) move to reading the *bytes* for the scorer while reading the graph and only building the scorer later, that's a larger refactor we leave for later.

In the end, we end up with:
* 1 round-trip to load the payment history and node metrics,
* 2 round-trips to load `ChannelMonitor`s and the `NetworkGraph` (where there's an internal extra round-trip after listing the monitor updates for a monitor),
* 1 round-trip to validate bitcoind RPC/REST access for those using bitcoind as a chain source,
* 1 round-trip to load various smaller LDK and ldk-node objects,
* and 1 additional round-trip to drop the RGS snapshot timestamp for nodes using P2P network gossip syncing,

for a total of 4 round-trips in the common case and 6 for nodes using the less common chain-source and gossip-sync options. We then have additional round-trips to our storage and chain source during node start, but those are in many cases already async.
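For illustration only, a rough sketch of that grouping, assuming a hypothetical `KvStore` with a simple async `read` helper (the real ldk-node store API and key names differ): independent reads share a round-trip via `tokio::join!`, while dependent ones stay sequenced.

```rust
// Sketch only: a hypothetical async store stub; what matters is the grouping
// of reads, not the actual ldk-node KVStore API.
struct KvStore;
#[derive(Debug)]
struct ReadError;

impl KvStore {
	async fn read(&self, _key: &str) -> Result<Vec<u8>, ReadError> {
		Ok(Vec::new())
	}
}

async fn init_reads(store: &KvStore) -> Result<(), ReadError> {
	// Round-trip 1: payment history and node metrics, which we need before we
	// can build the BDK wallet / `KeysManager`.
	let (payments, metrics) = tokio::join!(store.read("payments"), store.read("node_metrics"));
	let (_payments, _metrics) = (payments?, metrics?);

	// Round-trips 2-3: `ChannelMonitor`s and the `NetworkGraph` (the graph is
	// needed before we can build the scorer).
	let (monitors, graph) = tokio::join!(store.read("monitors"), store.read("network_graph"));
	let (_monitors, _graph) = (monitors?, graph?);

	// Final round-trip: the remaining, smaller LDK/ldk-node objects in parallel.
	let (scorer, manager, sweeper) = tokio::join!(
		store.read("scorer"),
		store.read("channel_manager"),
		store.read("output_sweeper")
	);
	let (_scorer, _manager, _sweeper) = (scorer?, manager?, sweeper?);
	Ok(())
}
```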
Force-pushed from f173114 to 78842ad
Based on #736, this uses the freshly-merged lightningdevkit/rust-lightning#4147 to parallelize `ChannelMonitor` loading, and while we're here (because I couldn't avoid taking the fun part, sorry @tnull) I went ahead and `tokio::join`ed some additional reading during init.