Robust multi-label learning for weakly labeled data


Abstract

Multi-label learning is one of the prominent problems in the field of machine learning. The deep neural networks used to solve it can be quite complex and have enormous capacity. This capacity, however, can also be a drawback, as such networks eventually tend to overfit undesirable features of the data. One such feature, present in real-world datasets, is imperfect labels. A particularly common type of label imperfection is known as weak labels: a corruption in which all relevant labels are present but some irrelevant ones are added as well. In this paper, a novel method, Co-ASL, is introduced to deal with label noise in multi-label datasets. It combines a state-of-the-art approach for multi-label learning, ASL, with the well-known robust training strategy Co-teaching. The performance of the method is evaluated on noisy versions of MS-COCO, demonstrating the absence of overfitting and a performance improvement over the non-robust multi-label ASL.
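
As a rough illustration of the combination described above, the sketch below pairs an ASL-style asymmetric multi-label loss with a Co-teaching-style small-loss sample exchange between two networks. This is a minimal sketch assuming PyTorch; the names (asl_loss, co_teaching_step, keep_ratio) and all hyperparameter values are illustrative assumptions, not taken from the paper.

```python
# Sketch only: an ASL-like asymmetric loss combined with a Co-teaching-style
# update. Function names and default hyperparameters are assumptions for
# illustration, not the paper's exact implementation.
import torch

def asl_loss(logits, targets, gamma_pos=0.0, gamma_neg=4.0, margin=0.05):
    """Per-sample asymmetric loss for a 0/1 multi-label target matrix."""
    p = torch.sigmoid(logits)
    p_neg = (p - margin).clamp(min=0)  # probability shifting for negatives
    loss_pos = targets * ((1 - p) ** gamma_pos) * torch.log(p.clamp(min=1e-8))
    loss_neg = (1 - targets) * (p_neg ** gamma_neg) * torch.log((1 - p_neg).clamp(min=1e-8))
    return -(loss_pos + loss_neg).sum(dim=1)  # one scalar loss per sample

def co_teaching_step(model_a, model_b, opt_a, opt_b, x, y, keep_ratio=0.8):
    """One Co-teaching-style step: each network trains on its peer's small-loss samples."""
    with torch.no_grad():  # rank samples without building a graph
        loss_a = asl_loss(model_a(x), y)
        loss_b = asl_loss(model_b(x), y)
    k = max(1, int(keep_ratio * x.size(0)))
    idx_a = torch.topk(loss_a, k, largest=False).indices  # A's small-loss picks
    idx_b = torch.topk(loss_b, k, largest=False).indices  # B's small-loss picks

    opt_a.zero_grad()
    asl_loss(model_a(x[idx_b]), y[idx_b]).mean().backward()  # A learns from B's picks
    opt_a.step()

    opt_b.zero_grad()
    asl_loss(model_b(x[idx_a]), y[idx_a]).mean().backward()  # B learns from A's picks
    opt_b.step()
```

In this kind of scheme, keep_ratio is typically decreased over the first epochs so that a growing fraction of presumed-noisy (large-loss) samples is excluded from the peer updates.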
