# DAN: a Segmentation-free Document Attention Network for Handwritten Document Recognition

For more details about this package, make sure to see the documentation available at <https://atr.pages.teklia.com/dan/>.

This is an open-source project, licensed using [the MIT license](https://opensource.org/license/mit/).

## Development

For development and testing purposes, it may be useful to install the project as an editable package with pip.

This package depends on the `nerval` package, which is distributed through a GitLab package registry. You need [a personal access token](https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html) and access to the [nerval repository](https://gitlab.teklia.com/ner/nerval) in order to install this module. You will need to add the following to your `~/.netrc` file:

```shell
machine gitlab.teklia.com
login __token__
password <YOUR_PERSONAL_TOKEN>
```

Then you can install the package as an editable package with pip:

```shell
pip3 install --index-url https://gitlab.teklia.com/api/v4/projects/210/packages/pypi/simple -e .
```

### Linter

Code syntax is analyzed before the code is submitted.
To run the linter tool suite, you may use pre-commit:

```shell
pip install pre-commit
pre-commit run -a
```

### Run tests

Tests are executed with `tox` using [pytest](https://pytest.org). To install and run `tox`:

```shell
pip install tox
tox
```

To reload the test virtual environment, you can use `tox -r`.

Run a single test module: `tox -- <test_path>`

Run a single test: `tox -- <test_path>::<test_function>`

The tests use a large file stored via [Git-LFS](https://docs.gitlab.com/ee/topics/git/lfs/). Make sure to run `git-lfs pull` before running them.

### Update documentation

Please keep the documentation updated when modifying or adding features.
It's pretty easy to do:

```shell
pip install -r doc-requirements.txt
mkdocs serve
```

You can then write in Markdown in the relevant `docs/*.md` files, and see the live output on <http://localhost:8000>.

## Inference

To apply DAN to an image, one needs to first add a few imports and load an image. Note that the image should be in RGB.

```python
import cv2

from dan.ocr.predict.inference import DAN

image = cv2.cvtColor(cv2.imread(IMAGE_PATH), cv2.COLOR_BGR2RGB)
```

Then one can initialize and load the trained model with the parameters used during training. The directory passed as a parameter should contain:

- a `model.pt` file,
- a `charset.pkl` file,
- a `parameters.yml` file corresponding to the `inference_parameters.yml` file generated during training.

```python
model_path = "models"

model = DAN("cpu")
model.load(model_path, mode="eval")
```

To run the inference on a GPU, one can replace `cpu` with the name of the GPU.

In the end, one can run the prediction:

```python
text, confidence_scores = model.predict(image, confidences=True)
```

## Training

This package provides three subcommands. To get more information about any subcommand, use the `--help` option.

### Get started

See the [dedicated page](https://atr.pages.teklia.com/dan/get_started/training/) on the official DAN documentation.

### Data extraction from Arkindex

See the [dedicated page](https://atr.pages.teklia.com/dan/usage/datasets/extract/) on the official DAN documentation.

### Model training

See the [dedicated page](https://atr.pages.teklia.com/dan/usage/train/) on the official DAN documentation.

### Model prediction

See the [dedicated page](https://atr.pages.teklia.com/dan/usage/predict/) on the official DAN documentation.
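As a side note on the RGB requirement in the inference section above: `cv2.imread` returns pixels in BGR channel order, so the `cv2.cvtColor(..., cv2.COLOR_BGR2RGB)` call is required before passing the image to DAN. A minimal NumPy-only sketch (not using the DAN API) illustrating what that conversion does — for a 3-channel image it simply reverses the channel axis:

```python
import numpy as np

# A tiny 1x1 "image" in OpenCV's BGR channel order:
# blue = 10, green = 20, red = 30.
bgr = np.array([[[10, 20, 30]]], dtype=np.uint8)

# For a 3-channel array, cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# is equivalent to reversing the last (channel) axis.
rgb = bgr[:, :, ::-1]

print(rgb[0, 0].tolist())  # the same pixel in RGB order: [30, 20, 10]
```

Feeding a BGR image to the model without this step silently swaps the red and blue channels, which can degrade predictions.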