Conversation
force-pushed from e7d7bdd to 40f2a41
force-pushed from 06d8930 to 1d84c3d
```rust
/// Represents a downstream client connected to this node.
#[derive(Clone)]
pub struct Downstream {
    pub downstream_data: Arc<Mutex<DownstreamData>>,
```
We can move DownstreamData fields to Downstream struct here.
please note that this is simply following the same pattern as ChannelManager
I'd be happy to follow the suggestion above, but if DownstreamData is superfluous here, why isn't it on ChannelManager?
the immediate knee-jerk answer is probably the number of fields (few here vs multiple on ChannelManager), under the rationale that there aren't enough fields to justify a separate struct
but this would fall into the pattern fragmentation issue I've been flagging, which usually introduces unnecessary friction to reason about the code
tbh I ultimately don't really care whether we:
- keep `DownstreamData` as-is
- move `DownstreamData` fields to the `Downstream` struct

if I really had to pick one, I'd keep it as-is, for the reasons described above
but also no energy to dive into a long debate about this, so if we're opinionated and want to proceed with this debt of extra fragmentation, I'm not going to fight it
cc @GitGab19
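For context, a minimal sketch of the two layouts under discussion (field names are illustrative, not taken from the actual crate):

```rust
use std::sync::{Arc, Mutex};

// Option 1: keep the nested data struct (mirrors the ChannelManager pattern).
pub struct DownstreamData {
    pub id: u32, // illustrative field
}

#[derive(Clone)]
pub struct Downstream {
    pub downstream_data: Arc<Mutex<DownstreamData>>,
}

// Option 2: flatten the shared state directly onto the struct,
// wrapping each mutable field individually.
#[derive(Clone)]
pub struct DownstreamFlat {
    pub id: Arc<Mutex<u32>>, // illustrative field
}

fn main() {
    // Both layouts expose the same data; the difference is one lock
    // guarding a struct vs per-field locks.
    let nested = Downstream {
        downstream_data: Arc::new(Mutex::new(DownstreamData { id: 7 })),
    };
    assert_eq!(nested.downstream_data.lock().unwrap().id, 7);

    let flat = DownstreamFlat {
        id: Arc::new(Mutex::new(7)),
    };
    assert_eq!(*flat.id.lock().unwrap(), 7);
    println!("ok");
}
```

The trade-off being debated: the nested form keeps one pattern across the codebase and one lock per downstream; the flat form drops a struct but fragments the locking.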
We are changing this in dashmap PR for Pool. But I won't block your PR on this. Feel free to resolve.
ok if we're already aware of this, moving DownstreamData fields to Downstream struct
@Shourya742 can you do a sanity check on the adaptations I introduced here?
2ba3e49 to
e8a35c3
Compare
renaming pool-config.toml.template -> pool-jds-config.toml.template
and making embedded-jds as default
not sure if it's the right move, @lucasbalieiro @GitGab19 please sanity check?
this is gonna break other things in the build process.
we'll also probably need to adapt some tricks to run the apps with/without JDP, as we do with `--profile` nowadays
If it gets too complex to explain, I'll post a commit so you can cherry-pick it.
I'm still checking the fallout
yeah please share commit
```toml
# Protocol Extensions Configuration
# Extensions that the pool supports (will accept if requested by clients)
# Comment/uncomment to enable/disable specific extensions:
supported_extensions = [
```
not sure if this is needed
confirmed this is not needed
force-pushed from e8a35c3 to cbfa504
@lucasbalieiro can you please do a sanity check on cbfa504?

so far I haven't added any new integration tests... might add some before we merge this (open to suggestions), but marking it as ready for review... hopefully CI should be all green now
force-pushed from cbfa504 to afae806
pool-apps/pool/config-examples/mainnet/pool-jds-config-bitcoin-core-ipc-example.toml (outdated; resolved)
force-pushed from 2dadb44 to ea5cc2b
the issue @lucasbalieiro reported on today's dev call is a potential race-condition bug on

In this PR, I created this gist and asked @Sjors to reproduce, triage and sanity-check before we report to

it's non-deterministic and more easily reproducible via docker than

moving forward, we have a few options:

personally speaking, I'd lean towards the second option

whenever it gets fixed (either on v31 or some backport v30.3 or v31.1), hopefully we won't have to worry about it anymore
I also think we could proceed with the second option, even though it's still not clear to me how we can harden it on the docker side (cc @lucasbalieiro).
The gist refers to the 30.x Bitcoin Core branch, which has known crash issues (when the client disconnects mid-wait). The fixes are in master, and I'm not sure if they will be back-ported or if it's better to wait for v31. Maybe use master on CI until a 31.x branch exists?
thanks for the input @Sjors

v31 is introducing breaking changes on

this was detected on manual testing by @lucasbalieiro, not on CI

also, using master would require us to bring the scope of #318 here

during this investigation I asked
here are the steps it's suggesting, for which a sanity check would be appreciated

@lucasbalieiro can you try to elaborate a commit for me to cherry-pick here? either to replace ea5cc2b, or to be added on top of it... let me know which approach you think is better
Under the old code I think you can avoid crashes by calling interrupt on

we already do

like I mentioned above, we can try to mitigate it at a docker level (depending on @lucasbalieiro's analysis). but I'm hesitant to add extra lower-level interrupt wiring for
Adding integration tests:
Changed BitcoinCoreSv2TDP::handler_* methods from pub to pub(crate) to better encapsulate internal implementation:

- handle_coinbase_output_constraints()
- handle_request_transaction_data()
- handle_submit_solution()

These handlers are only called internally by the run() loop and monitors. External crates (pool, jd-client) communicate via channels and never call handlers directly, so they should not be part of the public API contract. This improves API clarity by making it explicit that the channel-based interface is the intended way to interact with BitcoinCoreSv2TDP.
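As a hedged illustration of that visibility change (struct and method names from the commit message; bodies are placeholders, not the real implementation):

```rust
mod tdp {
    pub struct BitcoinCoreSv2TDP;

    impl BitcoinCoreSv2TDP {
        // pub(crate): still callable from the run() loop and monitors
        // within this crate, but no longer part of the public API.
        pub(crate) fn handle_submit_solution(&self) -> &'static str {
            "handled"
        }

        // The channel-driven entry point stays public; externally,
        // callers interact via channels rather than the handlers.
        pub fn run(&self) -> &'static str {
            self.handle_submit_solution()
        }
    }
}

fn main() {
    let tdp = tdp::BitcoinCoreSv2TDP;
    // Same-crate code (here, main) can still reach the handler via run().
    assert_eq!(tdp.run(), "handled");
    println!("ok");
}
```

A downstream crate depending on this one would see only `run()`; calling `handle_submit_solution()` from outside the crate would fail to compile.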
…x_data skips coinbase tx
force-pushed from ea5cc2b to 9264728
@plebhash, regarding the Docker changes:

You can cherry-pick this commit: lucasbalieiro@7f2d9db and apply it on top of your commit f8c8907. Or merge the two. Up to you whether that's the preferred approach.

In my commit I remove

Dropped

Another change adjusts how signals are handled by the Docker engine. This solution works on macOS; it still needs testing on Linux.

Regarding the nerding out about the signal handling issue:

When running

However, our applications expect

The solution is based on this option in the Docker Compose docs:

Other approaches I tried but rejected:
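For reference, aligning a container's stop behavior with an app's Ctrl+C path is typically done with the Compose `stop_signal` option; a minimal hedged sketch (service and image names are illustrative, not from this repo):

```yaml
services:
  pool:                        # illustrative service name
    image: example/pool-sv2    # illustrative image
    stop_signal: SIGINT        # deliver SIGINT on stop/down (matches the Ctrl+C graceful path)
    stop_grace_period: 30s     # time allowed for graceful teardown before SIGKILL
```

By default Docker sends SIGTERM on stop; `stop_signal` swaps that for the signal the application's shutdown handler actually listens for.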
thanks @lucasbalieiro

just to clarify the crash path in Bitcoin Core (rephrasing your point):

our graceful path is triggered when the app receives Ctrl+C (

if the process is terminated abruptly (

this is not Docker-specific; similar behavior can happen with

however these are all highly unlikely scenarios on regular UX use-cases

so the goal here is mitigation, not a full fix: we should avoid adding heavy local shutdown-workaround complexity in this PR, and instead harden Docker behavior to reduce exposure. lucasbalieiro@7f2d9db does that by aligning container stop behavior with our graceful path (

root fix remains upstream in Bitcoin Core/libmultiprocess (with fixes already in master; v31 coming out soon), so I'll cherry-pick this as an interim mitigation.
After finding 1+ blocks the JDC sends

```
2026-03-13T20:41:00.847773Z  WARN jd_client_sv2::channel_manager::jd_message_handler: Received: DeclareMiningJobError(request_id: 6, error_code: invalid-mining-job-token, error_details: B064K())
2026-03-13T20:41:00.847789Z  WARN jd_client_sv2::channel_manager::jd_message_handler: ⚠️ JDS refused the declared job with a DeclareMiningJobError ❌. Starting fallback mechanism.
2026-03-13T20:41:00.847869Z ERROR jd_client_sv2::channel_manager: Error handling JDS message error=JDCError { kind: DeclareMiningJobError, action: Fallback, _owner: PhantomData<jd_client_sv2::error::ChannelManager> }
2026-03-13T20:41:00.847908Z DEBUG jd_client_sv2::status: Sending status from ChannelManager: UpstreamShutdownFallback(DeclareMiningJobError)
```

This triggers the fallback mechanism in JDC, drops all clients and flags the JDS as a malicious upstream.

Also, another problem is that the fallback mechanism does NOT await for the jd_client port of the downstream server to be free, generating the error:

```
2026-03-13T20:41:01.860854Z ERROR jd_client_sv2::channel_manager: Failed to bind downstream server at 127.0.0.1:34265 error=Os { code: 98, kind: AddrInUse, message: "Address already in use" }
```

This problem happened consistently on my setup after finding 1+ blocks.

Attached the logs from a testing session @ 69f7447

close #24
`jd_server_sv2` `bitcoin_core_sv2` `pool_sv2` leveraging `jd_server_sv2`