
Multi-GPU DDP - How the dataset is distributed across the GPUs #13342

I believe this line in PyTorch's DistributedSampler code explains it all:

indices = indices[self.rank:self.total_size:self.num_replicas]
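In other words, every process keeps a strided slice of the (shuffled, padded) index list: rank r takes indices r, r + num_replicas, r + 2*num_replicas, and so on. Below is a minimal sketch of that idea, not the actual PyTorch source; the helper name `shard_indices` and the padding step (which mirrors DistributedSampler's default drop_last=False behaviour) are assumptions for illustration.

```python
import math

def shard_indices(dataset_size, num_replicas, rank):
    # Hypothetical helper illustrating DistributedSampler's strided slicing.
    indices = list(range(dataset_size))
    # Pad so the total length is evenly divisible by the number of replicas
    # (assumed here, matching the default drop_last=False behaviour).
    total_size = math.ceil(dataset_size / num_replicas) * num_replicas
    indices += indices[: total_size - dataset_size]
    # Each rank takes every num_replicas-th index, starting at its own rank.
    return indices[rank:total_size:num_replicas]

# Example: 10 samples across 4 GPUs (padded to 12 indices).
for rank in range(4):
    print(rank, shard_indices(10, num_replicas=4, rank=rank))
# 0 [0, 4, 8]
# 1 [1, 5, 9]
# 2 [2, 6, 0]
# 3 [3, 7, 1]
```

So with shuffling disabled, each GPU sees a disjoint subset of the dataset (except for the few padded duplicates), which is why each epoch's work is split roughly evenly across processes.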
