Support pytorch memory profiler
Refs https://redmine.teklia.com/issues/4768
In order to track the memory leak happening during training, it would be super useful to get an inside view of memory usage (observing from the outside is too noisy).
The overall idea of the pytorch profiler is to encapsulate the code to analyze in a context manager.
We should focus on getting data from the training steps, as validation does not leak (it would be interesting to have that data too, but it is non-blocking); see the sketch below for how the training loop could be wrapped.
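A minimal sketch of what that wrapping could look like, assuming a generic training loop (the model/optimizer/dataloader names are placeholders, only the torch.profiler calls are real API):

```python
import torch
from torch.profiler import profile, schedule, ProfilerActivity

# Hypothetical training objects; replace with our actual model/optimizer/dataloader.
model, optimizer, train_loader = ..., ..., ...

# Only profile a handful of training steps to keep overhead and trace size manageable.
profiling_schedule = schedule(wait=1, warmup=1, active=3, repeat=1)

with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=profiling_schedule,
    profile_memory=True,   # track tensor memory allocations/deallocations
    record_shapes=True,
) as prof:
    for batch in train_loader:
        optimizer.zero_grad()
        loss = model(batch)
        loss.backward()
        optimizer.step()
        prof.step()  # tell the profiler a training step is done
```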
Goal n°1 would be to get raw JSON output as mentioned in the above link, with a trace_handler that writes the traces to /tmp/ (see the sketch below).
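Such a handler could follow the pattern from the PyTorch profiler documentation, passed to the profiler via on_trace_ready (the file naming below is only an example):

```python
def trace_handler(prof):
    # Quick summary of the operators that allocate the most memory on CPU.
    print(prof.key_averages().table(sort_by="self_cpu_memory_usage", row_limit=10))
    # Dump the raw Chrome-trace JSON, one file per profiling cycle, into /tmp/.
    prof.export_chrome_trace(f"/tmp/trace_{prof.step_num}.json")

# Hooked up in the context manager above as:
#   profile(..., on_trace_ready=trace_handler)
```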
If that initial goal is met, we should then try to use the dedicated TensorBoard handler so the traces can be analyzed through TensorBoard.
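That step should mostly amount to swapping the custom handler for torch.profiler.tensorboard_trace_handler (the log directory here is just an example):

```python
from torch.profiler import tensorboard_trace_handler

# Writes traces in the layout expected by the torch-tb-profiler TensorBoard plugin.
on_trace_ready = tensorboard_trace_handler("/tmp/profiler_logs")

# Then, to inspect them:
#   pip install torch-tb-profiler
#   tensorboard --logdir /tmp/profiler_logs
```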