Conversation

@GeorgiosSmyrnis
Collaborator

This adds an option to predownload data to local storage at the start of each checkpoint, which helps mitigate transient S3 errors.

@GeorgiosSmyrnis force-pushed the gsmyrnis/improve_s3_dl branch 2 times, most recently from 959d8f1 to 101a787 on May 19, 2024 20:21
@GeorgiosSmyrnis force-pushed the gsmyrnis/improve_s3_dl branch from 101a787 to d20b585 on May 22, 2024 04:59
stderr=subprocess.PIPE,
)
if result.returncode != 0:
raise RuntimeError(f"Error: Failed to download data to local storage: {result.stderr.decode('utf-8')}")
Collaborator

Should we try this a few times before erroring?

Collaborator

Do we need a barrier after the copy, so the other workers don't start reading before worker 0 finishes copying?

Collaborator Author

I'll add retries. There is already a dist barrier that ensures the training string is the same across all nodes, so no additional barrier is needed.

