
Conversation

@TheBlueMatt
Contributor

Based on #736, this uses the freshly-merged lightningdevkit/rust-lightning#4147 to parallelize ChannelMonitor loading, and while we're here because I couldn't avoid taking the fun part (sorry @tnull) I went ahead and tokio::joined some additional reading during init.

@ldk-reviews-bot

ldk-reviews-bot commented Jan 8, 2026

I've assigned @tnull as a reviewer!
I'll wait for their review and will help manage the review process.
Once they submit their review, I'll check if a second reviewer would be helpful.

@ldk-reviews-bot ldk-reviews-bot requested a review from tnull January 8, 2026 20:21
@ldk-reviews-bot

🔔 1st Reminder

Hey @tnull! This PR has been waiting for your review.
Please take a look when you have a chance. If you're unable to review, please let us know so we can find another reviewer.

Collaborator

@tnull tnull left a comment


This needs a rebase now. Mostly looks good otherwise, I think.

@TheBlueMatt TheBlueMatt force-pushed the 2026-01-parallel-monitors branch from 71fdb1d to f173114 Compare January 13, 2026 15:00
@TheBlueMatt
Contributor Author

Rebased and added a trivial prefactor that moved the spawner to runtime.rs.

In a few commits, as we upgrade LDK, we'll use `RuntimeSpawner`
outside of gossip, so it will make much more sense to have it in
`runtime.rs` instead.
Upstream LDK added the ability to read `ChannelMonitor`s from
storage in parallel, which we switch to here.
Since I was editing the init logic anyway, I couldn't resist going
ahead and parallelizing various read calls. Because we added
support for an async `KVStore` in LDK 0.2/ldk-node 0.7, we can now
practically do initialization reads in parallel. Thus, rather than
making a long series of read calls in `build`, we use `tokio::join`
to reduce the number of round-trips to our backing store, which
should be a very large win for initialization cost for those using
remote storage (e.g. VSS).

Sadly, we can't trivially do all our reads in one go: we need the
payment history to initialize the BDK wallet, which is used in the
`Wallet` object referenced by our `KeysManager`. Thus we first
read the payment store and node metrics before moving on.
Then, we need a reference to the `NetworkGraph` when we build the
scorer. While we could/eventually should move to reading the
*bytes* for the scorer while reading the graph and only building
the scorer later, that's a larger refactor we leave for later.

In the end, we end up with:
 * 1 round-trip to load the payment history and node metrics,
 * 2 round-trips to load ChannelMonitors and NetworkGraph (where
   there's an internal extra round-trip after listing the monitor
   updates for a monitor),
 * 1 round-trip to validate bitcoind RPC/REST access for those
   using bitcoind as a chain source,
 * 1 round-trip to load various smaller LDK and ldk-node objects,
 * and 1 additional round-trip to drop the rgs snapshot timestamp
   for nodes using P2P network gossip syncing

for a total of 4 round-trips in the common case and 6 for nodes
using less common chain and gossip sync sources.

We then have additional round-trips to our storage and chain source
during node start, but those are in many cases already async.
@TheBlueMatt TheBlueMatt force-pushed the 2026-01-parallel-monitors branch from f173114 to 78842ad Compare January 13, 2026 15:07
@tnull tnull merged commit e6293bd into lightningdevkit:main Jan 13, 2026
21 of 29 checks passed