Scene text detection has attracted increasing attention with the rapid development of deep neural networks in recent years. However, existing scene text detectors may overfit to public datasets because of limited training data, or produce inaccurate localization for arbitrary-shape scene texts. This paper presents an arbitrary-shape scene text detection method that achieves better generalization and more accurate localization. We first propose a Scale-Aware Data Augmentation (SADA) technique to increase the diversity of training samples. SADA accounts for the scale variations and local visual variations of scene texts, which effectively alleviates the problem of limited training data. At the same time, SADA enriches the training minibatches, which helps accelerate training. Furthermore, a Shape Similarity Constraint (SSC) technique models the global shape structure of arbitrary-shape scene texts and backgrounds from the perspective of the loss function. SSC encourages the predicted text/non-text segmentation within candidate boxes to be similar to the corresponding ground truth, which helps localize more accurate boundaries for arbitrary-shape scene texts. Extensive experiments demonstrate the effectiveness of the proposed techniques, and state-of-the-art performance is achieved on public arbitrary-shape scene text benchmarks (e.g., CTW1500, Total-Text, and ArT).
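To make the shape-similarity idea concrete, the sketch below shows one plausible way such a constraint could be scored: a Dice-style overlap between the predicted text probability map and the ground-truth mask inside each candidate box. This is a minimal illustration under assumed inputs, not the paper's actual SSC formulation; the names `ssc_loss`, `pred_mask`, `gt_mask`, and `boxes` are hypothetical.

```python
# Illustrative sketch only: a Dice-style similarity constraint over candidate boxes.
# It is NOT the paper's exact SSC loss; all names below are hypothetical.
import numpy as np

def dice_similarity(p, g, eps=1e-6):
    """Soft Dice overlap between a predicted probability map p and a binary mask g."""
    inter = np.sum(p * g)
    return (2.0 * inter + eps) / (np.sum(p) + np.sum(g) + eps)

def ssc_loss(pred_mask, gt_mask, boxes):
    """Average (1 - Dice) over candidate boxes; lower values mean the predicted
    text/non-text segmentation inside each box better matches the ground truth."""
    losses = []
    for x0, y0, x1, y1 in boxes:
        p = pred_mask[y0:y1, x0:x1]
        g = gt_mask[y0:y1, x0:x1]
        losses.append(1.0 - dice_similarity(p, g))
    return float(np.mean(losses)) if losses else 0.0

# Toy usage with one candidate box over a 64x64 score map.
pred = np.random.rand(64, 64)                              # predicted text probability map
gt = (np.random.rand(64, 64) > 0.5).astype(np.float32)     # ground-truth text mask
print(ssc_loss(pred, gt, boxes=[(8, 8, 40, 40)]))
```

In this reading, the constraint acts as an auxiliary loss term that penalizes candidate boxes whose predicted segmentation deviates from the ground-truth shape, complementing the usual detection losses.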