tProxy fallbacks after receiving a SetExtranoncePrefix from upstream #247
GitGab19 wants to merge 2 commits into stratum-mining:main
…refix handler

Introduced a new error variant, `ExtranoncePrefixChanged`, to `TproxyErrorKind` to handle cases where the extranonce prefix changes after channel creation. Updated the mining message handler to log an error and trigger a fallback when `SetExtranoncePrefix` is received, since Sv1 clients do not support dynamic extranonce changes.
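The change described above might look roughly like the sketch below. Only the names `TproxyErrorKind` and `ExtranoncePrefixChanged` come from the PR description; the message struct, the handler signature, and the log text are simplified assumptions for illustration, not the actual tProxy code.

```rust
// Hypothetical, simplified types: only TproxyErrorKind and
// ExtranoncePrefixChanged are named in the PR description.
#[derive(Debug, PartialEq)]
enum TproxyErrorKind {
    ExtranoncePrefixChanged,
}

// Stand-in for the upstream SetExtranoncePrefix message.
struct SetExtranoncePrefix {
    channel_id: u32,
}

// Sketch of the handler: Sv1 clients cannot follow a dynamic extranonce
// change, so log an error and signal the fallback via the new variant.
fn handle_set_extranonce_prefix(msg: &SetExtranoncePrefix) -> Result<(), TproxyErrorKind> {
    eprintln!(
        "SetExtranoncePrefix on channel {}: Sv1 downstreams cannot follow, triggering fallback",
        msg.channel_id
    );
    Err(TproxyErrorKind::ExtranoncePrefixChanged)
}

fn main() {
    let res = handle_set_extranonce_prefix(&SetExtranoncePrefix { channel_id: 7 });
    assert_eq!(res, Err(TproxyErrorKind::ExtranoncePrefixChanged));
    println!("fallback triggered");
}
```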
Shouldn’t the solution here be to update the channel state? In the aggregated case, we would update the upstream channel’s extranonce prefix and disconnect all downstream channels. In the non-aggregated mode, we would simply disconnect the downstream and close all upstream channels. And we could let the downstreams know about the extranonce change via a setExtranonce message rather than performing disconnections?
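The alternative floated here (keep the upstream channel alive, update its stored prefix, and notify downstreams via setExtranonce) could be sketched roughly as below. All struct and field names are hypothetical, and the hardcoded extranonce2 size and JSON shape are illustrative only; as the following comments note, this only works for Sv1 clients that actually honour `mining.set_extranonce`.

```rust
// Minimal hex encoder so the sketch stays dependency-free.
fn to_hex(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{:02x}", b)).collect()
}

// Hypothetical channel state: a stored extranonce prefix plus the
// ids of the Sv1 downstreams attached to this upstream channel.
struct UpstreamChannel {
    extranonce_prefix: Vec<u8>,
    downstream_ids: Vec<u32>,
}

impl UpstreamChannel {
    // Update the stored prefix and build one `mining.set_extranonce`
    // JSON-RPC notification per downstream, instead of disconnecting.
    // The extranonce2 size (4) is an illustrative placeholder.
    fn on_set_extranonce_prefix(&mut self, new_prefix: Vec<u8>) -> Vec<String> {
        self.extranonce_prefix = new_prefix;
        let extranonce1 = to_hex(&self.extranonce_prefix);
        self.downstream_ids
            .iter()
            .map(|_| {
                format!(
                    "{{\"id\":null,\"method\":\"mining.set_extranonce\",\"params\":[\"{}\",4]}}",
                    extranonce1
                )
            })
            .collect()
    }
}

fn main() {
    let mut ch = UpstreamChannel {
        extranonce_prefix: vec![0xaa],
        downstream_ids: vec![1, 2],
    };
    let msgs = ch.on_set_extranonce_prefix(vec![0xde, 0xad]);
    assert_eq!(msgs.len(), 2);
    assert!(msgs[0].contains("mining.set_extranonce"));
    assert!(msgs[0].contains("dead"));
    println!("{}", msgs[0]);
}
```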
We cannot rely on Sv1 clients supporting dynamic extranonce changes. I simply implemented what @plebhash suggested in the issue description, which is the fallback. But sure, another approach could be what you mentioned: disconnect all the clients, clean up the old channels, let them reconnect, and create new channels.
tbh when I wrote #139 my main goal was to highlight that the current implementation doesn’t handle this case. But indeed, there’s more than one way to solve it: the fallback was just the first thing that came to mind, and it feels like the easiest approach. Forcing all Sv1 clients to re-connect could also be viable (and even desirable, from an operational point of view), but I assume that solution would be substantially more complex?
I feel the fallback here is conceptually incorrect. Either the tproxy should fully disconnect from everything and perform a clean reconnect, or it should keep the upstream connection alive and explicitly reinitialize the lifecycle of all subsystems. This was previously deferred because we needed something like a DownstreamAll shutdown channel to ensure all downstream connections are properly dropped. With the ongoing lifecycle management refactor, I expect this to be addressed there. That refactor should provide the right foundation to implement the desired behaviour here.
I also think that disconnecting clients would be more correct in this case, and it seems we’re all aligned on this. Given that @Shourya742 opened an issue for a possible future batched disconnection mechanism (#257), I would turn this PR into a draft and pick it back up once we make progress there.
adapted description of #139 |
Closes #139