# Original implementation

The paper is available at https://arxiv.org/abs/2203.12273.

<div class="video-wrapper">
    <iframe width="560" height="315" src="https://www.youtube.com/embed/HrrUsQfW66E" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>

This model performs handwritten text and layout recognition with an end-to-end, segmentation-free, attention-based network.
DAN was evaluated on two public datasets, RIMES and READ 2016, at single-page and double-page levels.

The following results were published, reported as Character Error Rate (CER), Word Error Rate (WER), Layout Ordering Error Rate (LOER) and CER-based mean Average Precision (mAP_cer):

|                         | CER (%) | WER (%) | LOER (%) | mAP_cer (%) |
| :---------------------: | :-----: | :-----: | :------: | :---------: |
|   RIMES (single page)   |  4.54   |  11.85  |   3.82   |    93.74    |
| READ 2016 (single page) |  3.53   |  13.33  |   5.94   |    92.57    |
| READ 2016 (double page) |  3.69   |  14.20  |   4.60   |    93.92    |
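
CER and WER are edit-distance rates: the Levenshtein distance between the predicted and ground-truth transcriptions, normalised by the length of the ground truth, computed at character and word level respectively. The snippet below is a minimal, standalone sketch of these two metrics for illustration only; it is not taken from the DAN codebase and does not cover the layout metrics (LOER, mAP_cer).

```python
def levenshtein(ref: list, hyp: list) -> int:
    """Edit distance (insertions, deletions, substitutions) between two sequences."""
    # Dynamic programming with a single rolling row.
    previous = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        current = [i]
        for j, h in enumerate(hyp, start=1):
            current.append(min(
                previous[j] + 1,              # deletion
                current[j - 1] + 1,           # insertion
                previous[j - 1] + (r != h),   # substitution
            ))
        previous = current
    return previous[-1]


def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: character-level edit distance / reference length."""
    return levenshtein(list(reference), list(hypothesis)) / len(reference)


def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / number of reference words."""
    ref_words = reference.split()
    return levenshtein(ref_words, hypothesis.split()) / len(ref_words)


# Toy example (not from the paper): one inserted character, one wrong word.
print(cer("le chat dort", "le chats dort"))  # ~0.083 (1 edit over 12 characters)
print(wer("le chat dort", "le chats dort"))  # ~0.333 (1 edit over 3 words)
```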


Pretrained model weights are available [here](https://git.litislab.fr/dcoquenet/dan).