feat(pdp): support create and add endpoint #692
Conversation
Force-pushed 64afc4a to 726cf4d (compare)
```go
	return fmt.Errorf("expected to find dataSetId in receipt but failed to extract: %w", err)
	}
	// XXX: I considered checking here whether the dataset already exists in the DB, but I'm not sure it is needed
}
```
The majority of my questions are around this file. I haven't tested it yet, but I'm using dataset=0 as a sentinel value.
It might be better to use NULL instead. Also, I don't know about any table relations this might affect.
@LexLuthr
I think you should look at handleGetPieceAdditionStatus; that's the only place I can see where it might matter. The client can ask for piece-addition status, and it selects by data set id. What should the client ask for in the combined flow, and what do we expect them to get? I think maybe none of it works currently. We call that with getPieceAdditionStatus in the SDK.
The only other place where we have functionality outside of this PR that impacts that table is the trigger we add for transaction resolving and it doesn't care about data set id:
curio/harmony/harmonydb/sql/20250730-pdp-rename.sql
Lines 155 to 170 in 435f7b8
```sql
CREATE OR REPLACE FUNCTION update_pdp_data_set_piece_adds()
    RETURNS TRIGGER AS $$
BEGIN
    IF OLD.tx_status = 'pending' AND (NEW.tx_status = 'confirmed' OR NEW.tx_status = 'failed') THEN
        -- Update the add_message_ok field in pdp_data_set_piece_adds if a matching entry exists
        UPDATE pdp_data_set_piece_adds
        SET add_message_ok = CASE
                                 WHEN NEW.tx_status = 'failed' OR NEW.tx_success = FALSE THEN FALSE
                                 WHEN NEW.tx_status = 'confirmed' AND NEW.tx_success = TRUE THEN TRUE
                                 ELSE add_message_ok
            END
        WHERE add_message_hash = NEW.signed_tx_hash;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```
I could special-case dataset=0 in handleGetPieceAdditionStatus
I think the client should ask for the dataset creation status instead. After that, and after the piece gets processed, the get-status call will work, because pdp_data_set_piece_adds.data_set will get updated to the proper dataset.
@rvagg would appreciate you taking a look as well.
```go
type RequestBody struct {
	RecordKeeper string            `json:"recordKeeper"`
	Pieces       []AddPieceRequest `json:"pieces"`
}
```
oh yeah, nicely typed
Looks good, my only concern is the impact on …
Force-pushed 4ac5aa6 to 3224e22 (compare)
Looking at the schema of …, I welcome suggestions on how to solve this cleanly, but transitioning to a NULLable dataset column seems the cleanest option.
I pushed a commit with a nullable dataset id in pdp_data_set_piece_adds; otherwise, the foreign key constraint would get in the way.
Ohh, …
Looking at the table, I don't see any way to get around the problem while keeping the primary key.
tasks/pdp/proofset_addroot_watch.go (outdated)
```go
resolvedDataSetId := pieceAdd.DataSet
if !resolvedDataSetId.Valid {
	var err error
	resolvedDataSetId.Int64, err = extractDataSetIdFromReceipt(receipt)
	if err != nil {
		return fmt.Errorf("expected to find dataSetId in receipt but failed to extract: %w", err)
	}
	resolvedDataSetId.Valid = true
	var exists bool
	// we check whether the dataset already exists to avoid a foreign key violation
	err = db.QueryRow(ctx, `
		SELECT EXISTS (
			SELECT 1
			FROM pdp_data_sets
			WHERE id = $1
		)`, resolvedDataSetId.Int64).Scan(&exists)
	if err != nil {
		return fmt.Errorf("failed to check if data set exists: %w", err)
	}
	if !exists {
		// XXX: maybe return nil instead to avoid a warning?
		return fmt.Errorf("data set %d not found in pdp_data_sets", resolvedDataSetId.Int64)
	}
}
```
I would prefer not to try create_watch here as well. Just scan from the DB, and if it is NULL, skip processing addPiece for this tipset. But that might cause an unnecessary delay of 1 tipset. Any other ideas on how to do this without 2 watches?
> Just scan from DB and if it is NULL then just skip processing addPiece for this tipset
That is what I'm doing here. If we get an add_piece trigger and the dataset doesn't exist yet, we leave it alone and wait for another trigger once the dataset is created.
A cleaner way might be to process create_watch with a higher priority than add_pieces. This could be achieved by combining them into one watcher and executing sequentially, with create first.
After processing it sequentially through the combined watcher, I think we could obtain the dataset ID from the pdp_data_sets based on the create_message_hash.
Force-pushed c18dbf2 to f8f7507 (compare)
these files probably should be renamed too 😬
Outside of this PR, it doesn't cause any 'damage' right now, and given that it is in the pdpNext branch, I don't want extra conflicts.
```go
SubPieceSize  int64  `db:"sub_piece_size"`
PDPPieceRefID int64  `db:"pdp_pieceref"`
AddMessageOK  *bool  `db:"add_message_ok"`
PDPDataSetId  uint64 `db:"data_set"`
```
hah, what was going on here ..
Looks fine overall, just a bunch of relatively minor items in this review pass.
Signed-off-by: Jakub Sztandera <[email protected]>
Force-pushed 24f9639 to d087dce (compare)
* feat(pdp): support create and upload
* use nullable data_set id in pdp_data_set_piece_adds to bypass FK
* combine pdp watchers for optimistic sequence create and add
* Rename the migration file to trigger migration
* Check error for insertPieceAdds in combined flow
* pieceAddHandler: Remove WHERE dataSet = $id because it doesn't work with NULL
* lint clean up and overwrite possible bad entry.DataSet
* use decodeExtraData
* log when transformAddPiecesRequest fails
* Rename proofset -> data set in new files

Signed-off-by: Jakub Sztandera <[email protected]>
No description provided.