Item status check: error on maximum item size exceedance and test with specific identifier/access key #485
Conversation
Fixes jjjake#293

Also replaces the abandoned PR #297
Thanks again @JustAnotherArchivist! This looks good, but I'm actually going to look into what it'd take to get item_size/files_count limit info from s3.us.archive.org rather than hard-coding it here. I'll keep you posted.
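For reference, the per-identifier/access-key limit info that can already be pulled from s3.us.archive.org comes from its check_limit query, which is what the library's existing s3_is_overloaded() helper uses, if I remember correctly; whether item_size/files_count limits are, or could be, exposed there is exactly the open question here. A rough sketch of poking at that endpoint (the identifier and key values are hypothetical, and the response fields beyond over_limit are not assumed):

```python
import requests

def fetch_limit_info(identifier: str, access_key: str) -> dict:
    # check_limit=1 with a bucket/accesskey is what the overload check queries;
    # any size or file-count limit fields would be new and are not assumed here.
    r = requests.get('https://s3.us.archive.org',
                     params={'check_limit': 1,
                             'accesskey': access_key,
                             'bucket': identifier},
                     timeout=30)
    r.raise_for_status()
    return r.json()

info = fetch_limit_info('my-test-item', 'MY_ACCESS_KEY')  # hypothetical values
print('over limit:', info.get('over_limit'))
```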
```python
                  'Expect 503 SlowDown errors.',
                  file=sys.stderr)
            sys.exit(1)
        elif item.item_size >= MAX_ITEM_SIZE:
```
Suggested change:

```diff
-        elif item.item_size >= MAX_ITEM_SIZE:
+        elif item.item_size > MAX_ITEM_SIZE:
```
This would require some testing to see whether IA's servers still accept any upload (including an empty file) when the item is exactly 1 TiB. It might be tricky, though, since I think the metadata files, which get modified after every upload, also count towards the item size.
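To make the off-by-one question concrete, here is a minimal sketch of the size part of the status check, assuming the limit is the hard-coded 1 TiB from this PR and that item_size is the current size in bytes; the function shape and message text are illustrative, not the PR's actual code:

```python
import sys

# Assumed hard-coded limit under discussion: 1 TiB, in bytes.
MAX_ITEM_SIZE = 1024 ** 4


def size_check(item_size: int, identifier: str) -> int:
    """Return an exit code for the --status-check size test.

    With `>=`, an item sitting at exactly 1 TiB is reported as full even if
    the servers would still accept a further (e.g. empty-file) upload; with
    `>`, that edge case passes. Which is correct depends on how IA treats an
    item at exactly the limit, which is what would need testing.
    """
    if item_size >= MAX_ITEM_SIZE:  # the suggestion above would use `>`
        print(f'warning: {identifier} is at or over the maximum item size.',
              file=sys.stderr)
        return 1
    return 0
```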
```python
    # Status check.
    if args['<identifier>']:
        item = session.get_item(args['<identifier>'])
```
Do we really want to get an item that could be 1 TB or more before we do a status check?
Well, we need the Item object for both the size status check and the actual upload. While this structure means we needlessly fetch the item metadata when S3 is overloaded, it avoids a more complicated chain of conditions (e.g. first run the S3 overload check if --status-check is present, then fetch the item metadata if the identifier is present, then check the item size if both are present, then exit successfully if --status-check is present), which in my opinion makes the code less readable. Note that only the metadata is fetched here, not the files themselves, so the request stays small even for a large item. The alternatives are two separate lines with get_item calls, which is just as ugly, or some sort of lazy evaluation, which is fairly complicated to implement. So I found this to be the least awkward solution.
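To make the ordering concrete, a rough sketch of the flow being defended, assuming a docopt-style args dict and an internetarchive session with get_item() and s3_is_overloaded(); the MAX_ITEM_SIZE constant, message strings, and exact structure are illustrative, not the PR's actual code:

```python
import sys

MAX_ITEM_SIZE = 1024 ** 4  # assumed 1 TiB limit, as elsewhere in this PR


def status_check_flow(args, session):
    # Fetch the Item once: it is needed both for the size status check and
    # for the actual upload, so --status-check does not get its own
    # get_item() call even though the fetch is wasted when S3 is overloaded.
    item = None
    if args['<identifier>']:
        item = session.get_item(args['<identifier>'])

    if args['--status-check']:
        if session.s3_is_overloaded():
            print('warning: s3.us.archive.org is over limit and not '
                  'accepting requests. Expect 503 SlowDown errors.',
                  file=sys.stderr)
            sys.exit(1)
        elif item is not None and item.item_size >= MAX_ITEM_SIZE:
            print(f'warning: {item.identifier} is over the maximum item size '
                  'and not accepting uploads.', file=sys.stderr)
            sys.exit(1)
        print('success: ready to accept requests.')
        sys.exit(0)

    # ... continue with the actual upload, reusing `item` ...
```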