Commit 8130a00

committed
add find_unused_parameters for ddp training
1 parent 45a836d commit 8130a00

File tree

1 file changed

+2
-1
lines changed


segmentation/tool/train.py

Lines changed: 2 additions & 1 deletion

@@ -142,7 +142,8 @@ def main_worker(gpu, ngpus_per_node, argss):
         nn.SyncBatchNorm.convert_sync_batchnorm(model)
         model.cuda()
         # model parallel (Note: During DDP Training, enable 'find_unused_parameters' to freeze repsurf)
-        model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[gpu], find_unused_parameters=True)
+        model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[gpu],
+                                                          find_unused_parameters='repsurf' in args.model)
     else:
         # model
         model.cuda()
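For context on what this change does: `find_unused_parameters=True` tells PyTorch's `DistributedDataParallel` to traverse the autograd graph after each forward pass and mark parameters that received no gradient, which is required when part of the model (here, a frozen RepSurf branch) never produces gradients; without it, DDP waits for gradients that never arrive. Since that traversal adds per-iteration overhead, the commit enables it only when the model name contains `'repsurf'`. A minimal sketch of the same predicate, with hypothetical model-name strings chosen for illustration:

```python
# Hedged sketch of the flag selection added in this commit.
# The model-name strings below are illustrative, not taken from the repo.
def needs_unused_param_search(model_name: str) -> bool:
    """Return True when DDP should scan for unused parameters.

    RepSurf variants may freeze a sub-network, so some parameters never
    receive gradients; DDP must be told to detect them each backward pass.
    """
    return 'repsurf' in model_name

print(needs_unused_param_search('repsurf_ssg'))  # True
print(needs_unused_param_search('pointnet2'))    # False
```

In the actual training script this boolean is passed straight to `torch.nn.parallel.DistributedDataParallel(..., find_unused_parameters=...)`, so non-RepSurf models keep the cheaper default behavior.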
