From a41610404d8457d49fe85df2176a3dfb59e85746 Mon Sep 17 00:00:00 2001
From: manonBlanco <blanco@teklia.com>
Date: Wed, 17 Jan 2024 16:23:01 +0100
Subject: [PATCH] Move the "data" section

---
 docs/usage/train/config.md | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/docs/usage/train/config.md b/docs/usage/train/config.md
index 444635a9..70938420 100644
--- a/docs/usage/train/config.md
+++ b/docs/usage/train/config.md
@@ -80,16 +80,6 @@ folder/
 | `training.load_epoch` | Model to load. Should be either `"best"` (evaluation) or `last` (training). | `str` | `"last"` |
 | `training.lr_schedulers` | Learning rate schedulers. | custom class | |
 
-### Data
-
-| Name | Description | Type | Default |
-| ------------------------------ | ---------------------------------------------------------- | ------ | ---------------------------------------------------- |
-| `training.data.batch_size` | Mini-batch size for the training loop. | `int` | `2` |
-| `training.data.load_in_memory` | Load all images in CPU memory. | `bool` | `True` |
-| `training.data.worker_per_gpu` | Number of parallel processes per gpu for data loading. | `int` | `4` |
-| `training.data.preprocessings` | List of pre-processing functions to apply to input images. | `list` | (see [dedicated section](#data-preprocessing)) |
-| `training.data.augmentation` | Whether to use data augmentation on the training set. | `bool` | `True` (see [dedicated section](#data-augmentation)) |
-
 ### Device
 
 | Name | Description | Type | Default |
@@ -141,7 +131,17 @@ folder/
 | `training.transfer_learning.encoder` | Model to load for the encoder \[state_dict_name, checkpoint_path, learnable, strict\]. | `list` | `["encoder", "pretrained_models/dan_rimes_page.pt", True, True]` |
 | `training.transfer_learning.decoder` | Model to load for the decoder \[state_dict_name, checkpoint_path, learnable, strict\]. | `list` | `["decoder", "pretrained_models/dan_rimes_page.pt", True, False]` |
 
-### Data preprocessing
+### Data
+
+| Name | Description | Type | Default |
+| ------------------------------ | ---------------------------------------------------------- | ------ | ----------------------------------------------- |
+| `training.data.batch_size` | Mini-batch size for the training loop. | `int` | `2` |
+| `training.data.load_in_memory` | Load all images in CPU memory. | `bool` | `True` |
+| `training.data.worker_per_gpu` | Number of parallel processes per gpu for data loading. | `int` | `4` |
+| `training.data.preprocessings` | List of pre-processing functions to apply to input images. | `list` | (see [dedicated section](#preprocessing)) |
+| `training.data.augmentation` | Whether to use data augmentation on the training set. | `bool` | `True` (see [dedicated section](#augmentation)) |
+
+#### Preprocessing
 
 Preprocessing is applied before training the network (see the [dedicated references](../../ref/ocr/managers/dataset.md)). The list of accepted transforms is defined in the [dedicated references](../../ref/ocr/transforms.md#dan.ocr.transforms.Preprocessing).
 
@@ -196,7 +196,7 @@ Usage:
 ]
 ```
 
-### Data augmentation
+#### Augmentation
 
 Augmentation transformations are applied on-the-fly during training to artificially increase data variability.
 
--
GitLab
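
For readers of the moved `Data` section: the parameters it documents correspond to the `training.data` block of the training configuration. The snippet below is a minimal sketch of that block, assuming the configuration is expressed as a nested Python dict whose structure mirrors the dotted parameter names; the values are the defaults listed in the table, and the `max_resize` pre-processing entry (and its sizes) is purely illustrative, not taken from this patch.

```python
# Hypothetical sketch of the `training.data` block described in the moved section.
# The nesting mirrors the dotted parameter names from the documentation table;
# the `max_resize` entry and its dimensions are illustrative assumptions.
training_config = {
    "training": {
        "data": {
            "batch_size": 2,         # mini-batch size for the training loop
            "load_in_memory": True,  # load all images in CPU memory
            "worker_per_gpu": 4,     # parallel data-loading processes per GPU
            "preprocessings": [
                {"type": "max_resize", "max_width": 2000, "max_height": 2000}
            ],
            "augmentation": True,    # enable data augmentation on the training set
        }
    }
}
```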