Add SUP: Batched Commitments for AltDA-based OP Stack Chains #1
base: main
Conversation
Very clean proposal, thanks for this! Added some comments for things to discuss.
> 1. The derivation pipeline MUST:
>    - Detect BatchedCommitmentType (0x02) and process accordingly
>    - Extract and validate each sub-commitment independently
>    - Maintain existing challenge mechanisms for each sub-commitment
The only existing challenge mechanism is the DaChallengeContract for keccak commitments, right? What's the current state of the conversation around getting rid of this in general, given that most people are using generic commitments?
In our case we are considering potentially failing over to keccak commitments if EigenDA is ever down. The derivation pipeline must thus be able to switch dynamically between the two commitment types. Would one then need the DAChallengeContract during the failover period?
Yeah, challenges are only for keccak commitments right now. From my understanding (though I might be missing some context here), the idea is to eventually deprecate the existing keccak256 commitment type and use a specific DA Layer byte (0xff) for keccak256. But my impression was that challenges would still be supported for them, and eventually maybe even for other generic commitments?
For this proposal though we wanted to also support keccak, because that is what Redstone currently uses. I would also like to better understand what the plan is for deprecating the existing keccak.
> #### Aggregating Commitments in DA Server
>
> An alternative approach for constructing Batched Commitments is to do it in the DA Server. The batcher would send the concatenated frames to the server, and the server would be in charge of parsing and constructing the resulting commitment.
>
> However, to keep sub-commitments individually challengeable (and not change the existing logic related to challenges) the server would need to store the input frame for each sub-commitment independently, which would break the server's API semantics.
Yet another approach, for DA layers like EigenDA that allow huge blobs (currently up to 16 MiB), is to change the batcher logic so that the channelMgr can either return huge frames or return multi-frames, and to send those as a single blob to the DA layer.
This means there is no need to deal with batched commitments and sub-commitments as is done here.
See e.g. ethereum-optimism/optimism#12400
Even though we could abstract this away in the DA server (and just hash all the frames or something), the main idea of the proposal was to still support challenges, and for that we need to be able to iterate over each individual sub-commitment. But I guess what you mention is still compatible with this approach? I imagine we would need some config option, or maybe even an additional `DaTypeBatchedAltDA` (building on top of ethereum-optimism/optimism#12400), so the batcher can differentiate between sending all the appended frames vs sending each frame independently.
Right, in my mind the logic would be similar to ethereum-optimism/optimism#13169, where DA dispersals are made in parallel to the da-server, and the returned "certs/commitments" would then be cached back into the channelMgr, to be pulled back out in order to follow Holocene ordering rules.
Right now the channelMgr allows specifying a `framesPerL1Tx` or something (this should be renamed to `framesPerDADispersal`), which defines the number of frames submitted to the DA layer (6 for 4844 blobs, potentially a lot for EigenDA, etc.). Then you would have another `commitmentsPerL1Tx`, which would tell you how many commitments to pull from the channelMgr, to be submitted as a batched-commitment tx.
Are you referring to `ChannelConfig`'s `MaxFramesPerTx()`? So basically, after that PR is merged, we would modify the batcher to wait until enough commitments (`commitmentsPerL1Tx`) are cached in order to batch them?
> - Submit each frame to the DA Server independently
> - Construct a valid batched commitment from returned commitments
Probably should add a note that aggregation of the sub-commitments needs to be done in order, to follow the new Holocene ordering rules. See e.g. ethereum-optimism/optimism#13169
done!
Looks good to me. I have been playing around with a similar idea, so happy to get a formal spec that allows batching many sub-commitments into a single one.
This proposal describes a protocol upgrade that enables batching multiple DA commitments into a single L1 transaction for AltDA-based OP Stack chains. It introduces a new commitment type (`BatchedCommitmentType`) that allows multiple sub-commitments to be submitted in a single transaction while preserving individual challengeability, resulting in significant gas cost savings by sharing the base transaction cost across multiple commitments.

Based on ethereum-optimism/specs#530