Contrastive Credibility Propagation for Reliable Semi-supervised Learning
DOI:
https://doi.org/10.1609/aaai.v38i19.30124
Keywords:
General
Abstract
Producing labels for unlabeled data is error-prone, making semi-supervised learning (SSL) troublesome. Often, little is known about when and why an algorithm fails to outperform a supervised baseline. Using benchmark datasets, we craft five common real-world SSL data scenarios: few-label, open-set, noisy-label, and class distribution imbalance/misalignment in the labeled and unlabeled sets. We propose a novel algorithm called Contrastive Credibility Propagation (CCP) for deep SSL via iterative transductive pseudo-label refinement. CCP unifies semi-supervised learning and noisy label learning for the goal of reliably outperforming a supervised baseline in any data scenario. Compared to prior methods which focus on a subset of scenarios, CCP uniquely outperforms the supervised baseline in all scenarios, supporting practitioners when the qualities of labeled or unlabeled data are unknown.
Published
2024-03-24
How to Cite
Kutt, B., Ramteke, P., Mignot, X., Toman, P., Ramanan, N., Rokka Chhetri, S., Huang, S., Du, M., & Hewlett, W. (2024). Contrastive Credibility Propagation for Reliable Semi-supervised Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(19), 21294-21303. https://doi.org/10.1609/aaai.v38i19.30124
Section
AAAI Technical Track on Safe, Robust and Responsible AI Track