Support DistributedDataParallel
All threads resolved!
Closes #116 (closed)
With this fix, we can (see the sketches below):
- Train on multiple GPUs
- Continue a training started with 1 GPU on multiple GPUs
- Continue a training started with multiple GPUs on 1 GPU
- Load any pre-trained model (trained with 0, 1 or more GPUs)
It also closes #143 (closed) and #118 (closed)
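For context, a minimal sketch of what the multi-GPU training path typically looks like with PyTorch's `DistributedDataParallel`; the `setup_ddp` helper, batch size, and backend choice are illustrative assumptions, not code from this MR:

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler


def setup_ddp(model, dataset, batch_size=8):
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for every process it spawns.
    local_rank = int(os.environ["LOCAL_RANK"])
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)

    # Each process holds a model replica; DDP synchronizes their gradients.
    model = DDP(model.cuda(local_rank), device_ids=[local_rank])

    # DistributedSampler gives each process a distinct shard of the dataset.
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=batch_size, sampler=sampler)
    return model, loader
```

Launched with e.g. `torchrun --nproc_per_node=2 train.py`; a single-process run keeps the same model unwrapped on one GPU.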
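Resuming a 1-GPU run on several GPUs, and the reverse (points 2-4 above), usually hinges on the `module.` prefix that DDP prepends to parameter names in the state dict. A hedged sketch of one common way to normalize it; `load_compatible_state_dict` is a hypothetical helper, not necessarily how this MR does it:

```python
from collections import OrderedDict

import torch
from torch.nn.parallel import DistributedDataParallel as DDP


def load_compatible_state_dict(model, checkpoint_path):
    """Load weights saved with or without DDP into a model that may
    or may not be DDP-wrapped itself (hypothetical helper)."""
    state_dict = torch.load(checkpoint_path, map_location="cpu")
    wrapped = isinstance(model, DDP)
    fixed = OrderedDict()
    for key, value in state_dict.items():
        if wrapped and not key.startswith("module."):
            key = f"module.{key}"  # checkpoint comes from a 1-GPU run
        elif not wrapped and key.startswith("module."):
            key = key[len("module."):]  # checkpoint comes from a multi-GPU run
        fixed[key] = value
    model.load_state_dict(fixed)
```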
Edited by Yoann Schneider
Activity
changed milestone to %DAN-P5: Improve training
added P2 label
assigned to @mboillet
added 2 commits
requested review from @yschneider
added 4 commits
- 69461e35...d38bab3e - 2 commits from branch main
- 2aaf98aa - Support DistributedDataParallel
- e528e935 - Document training on multiple GPUs
- Resolved by Mélodie Boillet
Also Closes #118 (closed)
enabled an automatic merge when the pipeline for 41872136 succeeds