To avoid small regressions in the loss when training is interrupted (e.g. by a memory leak or by running out of time/epochs), we should save the dropout scheduler's `step_num` value in the checkpoint and restore it on reload, so the dropout schedule resumes where it left off instead of resetting.
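As a minimal sketch of the idea, assuming a hypothetical `DropoutScheduler` class with a `step_num` counter (the class, its parameters, and the state-dict keys here are illustrative assumptions, not the project's actual API), the checkpoint round trip could look like this:

```python
import json


class DropoutScheduler:
    """Hypothetical scheduler that anneals the dropout rate over steps."""

    def __init__(self, start=0.5, end=0.1, total_steps=1000):
        self.start, self.end, self.total_steps = start, end, total_steps
        self.step_num = 0

    def rate(self):
        # Linear anneal from `start` to `end` over `total_steps`.
        frac = min(self.step_num / self.total_steps, 1.0)
        return self.start + (self.end - self.start) * frac

    def step(self):
        self.step_num += 1

    def state_dict(self):
        return {"step_num": self.step_num}

    def load_state_dict(self, state):
        # Restoring step_num keeps the schedule from restarting at step 0,
        # which would otherwise bump the dropout rate back up on resume.
        self.step_num = state["step_num"]


# Save/reload round trip.
sched = DropoutScheduler()
for _ in range(300):
    sched.step()
blob = json.dumps(sched.state_dict())  # stands in for writing the checkpoint

resumed = DropoutScheduler()
resumed.load_state_dict(json.loads(blob))
assert resumed.step_num == 300
assert resumed.rate() == sched.rate()
```

In a PyTorch-style setup the same `state_dict()` / `load_state_dict()` pair would simply be added to the existing checkpoint dictionary alongside the model and optimizer states.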