Correctly handle GPU usage and num_workers value
Currently, the worker only uses a GPU (CUDA device) when the user explicitly requests one. This is not the right behavior: it should use the GPU whenever one is available, just like the U-FCN training worker and the standard U-FCN worker.
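A minimal sketch of the intended device selection, assuming a plain PyTorch availability check (the `get_device` helper name is illustrative, not the actual worker code):

```python
import torch

def get_device() -> torch.device:
    """Return the CUDA device when one is available, otherwise fall back to CPU."""
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")

device = get_device()
```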
The `num_workers` parameter is a bit trickier. Many issues have been opened on PyTorch and elsewhere about how the DataLoader sometimes fails when this value is > 0. No fix has been released by PyTorch, so for now the best approach is to set it to 2 when a GPU is available and 0 on CPU, which is what the U-FCN training worker already does.
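A sketch of how that choice could be wired into a DataLoader (the dummy dataset is only there to make the example self-contained):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Use 2 worker processes when a GPU is available, 0 (load in the main process)
# on CPU, to avoid the reported DataLoader failures with num_workers > 0.
num_workers = 2 if torch.cuda.is_available() else 0

# Placeholder dataset just to illustrate passing the value to the DataLoader.
dataset = TensorDataset(torch.zeros(8, 3))
loader = DataLoader(dataset, batch_size=4, num_workers=num_workers)
```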