Enhance to Read Better: An Improved Generative Adversarial Network for Handwritten Document Image Enhancement

Documents used for handwritten text recognition are often affected by degradation. For instance, historical documents may suffer from corrupted text, dust, or wrinkles. Improper scanning, as well as watermarks and stamps, can also cause problems. Classical image recovery techniques try to reverse the degradation, but these models can deteriorate the text itself while cleaning the image.

Writing. Image credit: StockSnap via Pixabay, CC0 Public Domain

Therefore, a group of researchers proposes a deep learning model that learns its parameters not only from handwritten images but also from the associated text. It is based on generative adversarial networks (GANs) and includes a handwritten text recognizer that assesses the readability of the recovered image. Experiments on degraded Arabic and Latin documents demonstrate the effectiveness of the proposed model. The authors also show that training the recognizer progressively, from the degraded domain toward the clean versions, improves recognition performance.
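The progressive training idea can be illustrated with a short sketch. The two-stage schedule, the toy recognizer, and the dummy data below are assumptions made purely for illustration; they are not the authors' code or exact training setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for an HTR recognizer: flattens a 32x128 line image and
# predicts one of 80 character classes (a real recognizer would emit a
# character sequence instead of a single label).
recognizer = nn.Sequential(nn.Flatten(), nn.Linear(32 * 128, 80))
optimizer = torch.optim.Adam(recognizer.parameters(), lr=1e-3)

def make_loader(n=8):
    # Dummy (image, label) pairs standing in for a real dataset loader.
    return [(torch.rand(1, 1, 32, 128), torch.randint(0, 80, (1,))) for _ in range(n)]

degraded_loader = make_loader()  # stage 1: degraded document images
clean_loader = make_loader()     # stage 2: clean / enhanced versions

# Progressive (curriculum-style) training: degraded domain first, then clean.
for stage, loader in (("degraded", degraded_loader), ("clean", clean_loader)):
    for image, label in loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(recognizer(image), label)
        loss.backward()
        optimizer.step()
    print(f"finished {stage} stage, last loss {loss.item():.3f}")
```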

Handwritten document images can be highly affected by degradation for different reasons: paper ageing, daily-life scenarios (wrinkles, dust, etc.), a bad scanning process, and so on. These artifacts raise many readability issues for current Handwritten Text Recognition (HTR) algorithms and severely degrade their performance. In this paper, we propose an end-to-end architecture based on Generative Adversarial Networks (GANs) to recover degraded documents into a clean and readable form. Unlike the most well-known document binarization methods, which try to improve the visual quality of the degraded document, the proposed architecture integrates a handwritten text recognizer that promotes the generated document image to be more readable. To the best of our knowledge, this is the first work to use the text information while binarizing handwritten documents. Extensive experiments conducted on degraded Arabic and Latin handwritten documents demonstrate the usefulness of integrating the recognizer within the GAN architecture, which improves both the visual quality and the readability of the degraded document images. Moreover, after fine-tuning our pre-trained model with synthetically degraded Latin handwritten images, we outperform the state of the art on the H-DIBCO 2018 challenge.
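To make the architecture concrete, here is a minimal sketch of how a text recognizer can be plugged into a GAN-style enhancement objective. The network definitions, the CTC-based readability term, the pixel-level content term, and the loss weights are illustrative assumptions and simplifications, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy encoder-decoder mapping a degraded line image to a cleaned one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Toy discriminator scoring real (clean) vs. generated images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1))
    def forward(self, x):
        return self.net(x)

class Recognizer(nn.Module):
    """Toy CRNN-style recognizer producing per-column logits for CTC."""
    def __init__(self, num_classes=80):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d((1, None)))  # collapse height
        self.rnn = nn.LSTM(32, 64, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(128, num_classes)
    def forward(self, x):
        f = self.conv(x).squeeze(2).permute(0, 2, 1)  # (B, W, C)
        out, _ = self.rnn(f)
        return self.fc(out)                           # (B, W, num_classes)

def generator_loss(G, D, R, degraded, clean, targets, target_lengths,
                   lambda_content=1.0, lambda_read=1.0):
    """Adversarial term + pixel (content) term + readability (CTC) term."""
    fake = G(degraded)
    d_out = D(fake)
    adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    content = F.binary_cross_entropy(fake, clean)  # match the clean ground truth
    log_probs = R(fake).log_softmax(-1).permute(1, 0, 2)  # (W, B, classes) for CTC
    input_lengths = torch.full((fake.size(0),), log_probs.size(0), dtype=torch.long)
    read = F.ctc_loss(log_probs, targets, input_lengths, target_lengths, blank=0)
    return adv + lambda_content * content + lambda_read * read

if __name__ == "__main__":
    G, D, R = Generator(), Discriminator(), Recognizer()
    degraded, clean = torch.rand(2, 1, 32, 128), torch.rand(2, 1, 32, 128)
    targets = torch.randint(1, 80, (2, 10))                # dummy transcriptions
    lengths = torch.full((2,), 10, dtype=torch.long)
    print(generator_loss(G, D, R, degraded, clean, targets, lengths).item())
```

In this sketch the recognizer acts as an extra critic: the generator is rewarded not only for fooling the discriminator and matching the clean target, but also for producing images that the recognizer can still transcribe.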

Research paper: Khamekhem Jemni, S., Souibgui, M. A., Kessentini, Y., and Fornés, A., "Enhance to Read Better: An Improved Generative Adversarial Network for Handwritten Document Image Enhancement", 2021. Link: https://arxiv.org/abs/2105.12710
 