Fixes for multi-node execution with torchrun + LocalExecutor in Slurm environment #251
Merged
ko3n1g merged 1 commit into NVIDIA-NeMo:main · Jul 19, 2025
Conversation

hemildesai reviewed Jun 3, 2025
marcromeyn approved these changes Jul 2, 2025
hemildesai approved these changes Jul 19, 2025
- do prepare stage only from single process or rank
- for --node-rank, also look for SLURM_NODEID

Signed-off-by: Pramod Kumbhar <prkumbhar@nvidia.com>
Force-pushed from b7d57fe to 72db1af
zoeyz101 pushed a commit to zoeyz101/NeMo-Run that referenced this pull request on Nov 12, 2025:

…NeMo#251)
- do prepare stage only from single process or rank
- for --node-rank, also look for SLURM_NODEID

Signed-off-by: Pramod Kumbhar <prkumbhar@nvidia.com>
Signed-off-by: Zoey Zhang <zozhang@nvidia.com>
Summary

- For the `--node-rank` argument to `torchrun`, look for `SLURM_NODEID` as well

Issues Addressed
Multi-node execution with torchrun + LocalExecutor was mentioned in #130, but I don't think this feature has been tested thoroughly. This PR fixes two issues I saw while testing multi-node execution with torchrun + LocalExecutor:
1. Arguments to torchrun are not expanded properly

In this case, `$${node_rank_var}` is not expanded properly.

2. The prepare stage is not "protected" against execution from multiple ranks / processes
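The first fix can be sketched as follows. This is not NeMo-Run's actual code, just a minimal illustration of the idea from the commit message: when resolving the node rank for `torchrun --node-rank`, also fall back to `SLURM_NODEID` (which Slurm sets per node) instead of relying only on an explicit variable. The helper name `resolve_node_rank` is hypothetical.

```python
import os

def resolve_node_rank(default: str = "0") -> str:
    """Resolve the node rank to pass to torchrun's --node-rank.

    Hypothetical helper: prefer an explicit NODE_RANK variable,
    fall back to SLURM_NODEID (set by Slurm on each node),
    else use the default.
    """
    for var in ("NODE_RANK", "SLURM_NODEID"):
        value = os.environ.get(var)
        if value is not None:
            return value
    return default

# Example: under Slurm, SLURM_NODEID identifies this node.
os.environ.pop("NODE_RANK", None)
os.environ["SLURM_NODEID"] = "2"
print(resolve_node_rank())  # prints "2"
```

Resolving the value in Python (rather than emitting a literal `$${node_rank_var}` into the command line) sidesteps the shell-expansion problem entirely.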
We typically run a multi-node job as:

```shell
srun -N ${SLURM_NNODES} --ntasks-per-node=1 -n ${SLURM_NNODES} python train.py
```

As multiple processes are executing from the beginning, we see errors like:
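Because `srun` starts one process per node, every process would otherwise run the prepare stage concurrently. A minimal sketch of the second fix, guarding the prepare stage so only one Slurm task executes it (`is_primary_process` and `prepare` are hypothetical names, not NeMo-Run's API):

```python
import os

def is_primary_process() -> bool:
    """Return True only on the first Slurm task (or when not under Slurm).

    With `srun -N $SLURM_NNODES --ntasks-per-node=1`, every node starts
    its own process; SLURM_PROCID distinguishes them (0, 1, 2, ...).
    """
    return int(os.environ.get("SLURM_PROCID", "0")) == 0

def prepare() -> None:
    # e.g. create run directories, write launch scripts, serialize configs
    print("preparing job artifacts")

if is_primary_process():
    prepare()  # other tasks skip this and go straight to execution
```

In a real implementation the non-primary tasks would also need to wait (e.g. on the filesystem) until the prepared artifacts exist before proceeding.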
Testing
An example with torchrun + LocalExecutor:
Job script:
Additional Notes
Note that further improvements might be needed (as a separate PR), such as logging only from a single rank, because currently we see some log messages from all ranks:
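One way such a follow-up could look (a sketch only, with hypothetical names; not part of this PR): lower the log level on non-zero ranks so multi-node runs do not repeat every message once per process.

```python
import logging
import os

def get_rank_logger(name: str = "nemo_run") -> logging.Logger:
    """Hypothetical sketch: keep INFO logging on rank 0 only,
    so messages are not duplicated once per Slurm task."""
    logger = logging.getLogger(name)
    rank = int(os.environ.get("SLURM_PROCID", "0"))
    logger.setLevel(logging.INFO if rank == 0 else logging.ERROR)
    return logger

log = get_rank_logger()
log.info("visible only on rank 0")
```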
But with this PR, I wanted to at least get a working example.