
DAN: a Segmentation-free Document Attention Network for Handwritten Document Recognition

Python >= 3.10

For more details about this package, see the documentation available at https://atr.pages.teklia.com/dan/.

This is an open-source project, licensed under the MIT license.

Inference

To apply DAN to an image, one first needs to add a few imports and load the image. Note that the image must be in RGB.

import cv2
from dan.ocr.predict.inference import DAN

# IMAGE_PATH points to the image to transcribe. cv2.imread returns
# BGR pixels, so convert to the RGB order DAN expects.
image = cv2.cvtColor(cv2.imread(IMAGE_PATH), cv2.COLOR_BGR2RGB)
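For 3-channel images, the conversion above simply reverses the channel order. A minimal sketch of that reordering, using NumPy instead of OpenCV (the array values here are illustrative, not from the package):

```python
import numpy as np

# cv2.imread returns pixels in BGR order; DAN expects RGB.
# For a 3-channel image, reversing the last axis is equivalent to
# cv2.cvtColor(image, cv2.COLOR_BGR2RGB).
bgr = np.zeros((2, 2, 3), dtype=np.uint8)
bgr[..., 0] = 255  # fill the blue channel

rgb = bgr[..., ::-1]  # blue now sits in the last position, as RGB requires
```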

Then one can initialize and load the trained model with the parameters used during training. The directory passed as a parameter must contain:

  • a model.pt file,
  • a charset.pkl file,
  • a parameters.yml file corresponding to the inference_parameters.yml file generated during training.

model_path = "models"

model = DAN("cpu")
model.load(model_path, mode="eval")
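Since a missing file in the model directory will make loading fail, one may want to check for the three required files up front. A minimal sketch with a hypothetical helper (missing_model_files is not part of the DAN API; only the file names come from the list above):

```python
from pathlib import Path

# The three files DAN expects in the model directory (see the list above).
REQUIRED_FILES = ("model.pt", "charset.pkl", "parameters.yml")

def missing_model_files(model_dir):
    """Hypothetical helper: return the required files absent from model_dir."""
    model_dir = Path(model_dir)
    return [name for name in REQUIRED_FILES if not (model_dir / name).exists()]
```

Calling missing_model_files("models") before model.load would report, for example, ["charset.pkl"] if only that file were missing.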

To run the inference on a GPU, one can replace cpu with the name of the GPU device. Finally, one can run the prediction:

text, confidence_scores = model.predict(image, confidences=True)

Training

This package provides three subcommands. To get more information about any subcommand, use the --help option.

Get started

See the dedicated page on the official DAN documentation.

Data extraction from Arkindex

See the dedicated page on the official DAN documentation.

Model training

See the dedicated page on the official DAN documentation.

Model prediction

See the dedicated page on the official DAN documentation.