
Minimal DDP / FSDP / ZeRO Examples

All scripts share the same toy model and random dataset, so the only thing that differs between them is the parallelism strategy.

Files

  • common.py: shared model, dataset, and config.
  • ddp_min.py: DistributedDataParallel baseline.
  • fsdp_min.py: FullyShardedDataParallel (parameter sharding).
  • zero_min.py: DDP + ZeroRedundancyOptimizer (ZeRO stage-1 style optimizer sharding).

Environment

  • Python 3.10+
  • PyTorch with CUDA + NCCL
  • Multi-GPU machine

Run (4 GPUs)

cd parallel_minimal

torchrun --standalone --nproc_per_node=4 ddp_min.py
torchrun --standalone --nproc_per_node=4 fsdp_min.py
torchrun --standalone --nproc_per_node=4 zero_min.py

Run (1 GPU, quick smoke test)

cd parallel_minimal

torchrun --standalone --nproc_per_node=1 ddp_min.py
torchrun --standalone --nproc_per_node=1 fsdp_min.py
torchrun --standalone --nproc_per_node=1 zero_min.py

What to Observe

  • DDP:
    • Full model replica on each GPU.
    • Gradients synchronized every step.
  • FSDP:
    • Parameters/gradients are sharded across GPUs.
    • Better memory scaling, at the cost of extra communication (parameters are all-gathered for forward/backward and gradients reduce-scattered each step).
  • ZeRO (here, stage-1 style):
    • Model is still replicated (DDP), but optimizer states are sharded.
    • Memory savings mainly from optimizer states.
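The memory differences above reduce to simple arithmetic. A back-of-the-envelope sketch (assuming fp32 for everything and Adam, whose two moment buffers hold 2x the parameter count in optimizer state; mixed precision would change the constants but not the ratios):

```python
def per_gpu_elems(params, world_size, strategy):
    """Approximate per-GPU element counts for fp32 params + grads + Adam state."""
    grads = params
    opt_state = 2 * params  # Adam: exp_avg + exp_avg_sq
    if strategy == "ddp":    # everything replicated on every rank
        return params + grads + opt_state
    if strategy == "zero1":  # only optimizer state is sharded
        return params + grads + opt_state / world_size
    if strategy == "fsdp":   # params, grads, and optimizer state all sharded
        return (params + grads + opt_state) / world_size
    raise ValueError(strategy)

P, N = 1_000_000, 4
print(per_gpu_elems(P, N, "ddp"))    # 4000000
print(per_gpu_elems(P, N, "zero1"))  # 2500000.0
print(per_gpu_elems(P, N, "fsdp"))   # 1000000.0
```

With 4 GPUs, ZeRO-1 recovers less than half the memory FSDP does, because gradients and parameters (half the total here) stay replicated.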

Notes

  • zero_min.py uses PyTorch's ZeroRedundancyOptimizer, which implements ZeRO stage-1 (optimizer-state sharding) on top of a DDP-replicated model.
  • If you want ZeRO stage-2/3 (gradient and parameter sharding as well), use DeepSpeed or FSDP-style full sharding.
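The DDP + ZeroRedundancyOptimizer combination presumably looks something like the sketch below. It is a minimal single-process CPU version using the gloo backend so it runs without torchrun; the Linear model, batch shape, and learning rate are stand-ins, not the repo's actual common.py:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.optim import ZeroRedundancyOptimizer

# torchrun normally sets these; defaults let the sketch run standalone.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

model = DDP(torch.nn.Linear(16, 4))       # full replica; grads all-reduced per step
opt = ZeroRedundancyOptimizer(
    model.parameters(),
    optimizer_class=torch.optim.Adam,     # each rank keeps only its shard of Adam state
    lr=1e-3,
)

x = torch.randn(8, 16)                    # stand-in for the random dataset
loss = model(x).pow(2).mean()
loss.backward()
opt.step()                                # updates the local shard, then syncs params
dist.destroy_process_group()
```

With world_size=1 the sharding is trivial, but under torchrun with N processes each rank holds roughly 1/N of the Adam state, which is where the memory savings come from.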
