Diverse Adversaries for Mitigating Bias in Training

Xudong Han, Timothy Baldwin, Trevor Cohn


Abstract
Adversarial learning can produce fairer and less biased models of language processing than standard training. However, current adversarial techniques only partially mitigate model bias, and their training procedures are often unstable. In this paper, we propose a novel approach to adversarial learning based on the use of multiple diverse discriminators, whereby the discriminators are encouraged to learn hidden representations that are orthogonal to one another. Experimental results show that our method substantially improves over standard adversarial removal methods, both in reducing bias and in the stability of training.
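
The core idea admits a compact sketch: a shared encoder is trained against several discriminators through gradient reversal, while a pairwise "difference" loss pushes the discriminators' hidden representations toward orthogonality, so each adversary attacks the bias from a different direction. The PyTorch snippet below is a minimal illustration under assumed dimensions, module names, and a 0.1 diversity weight; it is not the authors' exact implementation (see the linked code for that).

```python
# Minimal sketch of adversarial bias removal with multiple diverse
# discriminators. All sizes, names, and loss weights are illustrative
# assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class Discriminator(nn.Module):
    """Predicts the protected attribute from the encoder's hidden state."""

    def __init__(self, in_dim, hid_dim, n_protected):
        super().__init__()
        self.hidden = nn.Linear(in_dim, hid_dim)
        self.out = nn.Linear(hid_dim, n_protected)

    def forward(self, h):
        z = torch.tanh(self.hidden(h))  # hidden representation used for the diversity loss
        return self.out(z), z


def difference_loss(zs):
    """Penalize pairwise similarity between discriminator hidden states,
    encouraging the discriminators to learn mutually orthogonal features."""
    loss = 0.0
    for i in range(len(zs)):
        for j in range(i + 1, len(zs)):
            # squared Frobenius norm of the batch-wise inner product
            loss = loss + (zs[i].T @ zs[j]).pow(2).sum()
    return loss


# Toy usage with hypothetical dimensions.
encoder = nn.Linear(300, 128)           # stand-in for a text encoder
classifier = nn.Linear(128, 2)          # main task head
advs = nn.ModuleList(Discriminator(128, 64, 2) for _ in range(3))

x = torch.randn(8, 300)                 # a batch of input features
y = torch.randint(0, 2, (8,))           # main-task labels
g = torch.randint(0, 2, (8,))           # protected-attribute labels

h = encoder(x)
task_loss = F.cross_entropy(classifier(h), y)

h_rev = GradReverse.apply(h, 1.0)       # adversaries see reversed gradients
adv_logits, zs = zip(*(adv(h_rev) for adv in advs))
adv_loss = sum(F.cross_entropy(logits, g) for logits in adv_logits)
div_loss = difference_loss(zs)

total = task_loss + adv_loss + 0.1 * div_loss  # 0.1 is an assumed weight
total.backward()
```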
Anthology ID:
2021.eacl-main.239
Volume:
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Month:
April
Year:
2021
Address:
Online
Editors:
Paola Merlo, Jörg Tiedemann, Reut Tsarfaty
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
2760–2765
URL:
https://aclanthology.org/2021.eacl-main.239
DOI:
10.18653/v1/2021.eacl-main.239
Cite (ACL):
Xudong Han, Timothy Baldwin, and Trevor Cohn. 2021. Diverse Adversaries for Mitigating Bias in Training. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2760–2765, Online. Association for Computational Linguistics.
Cite (Informal):
Diverse Adversaries for Mitigating Bias in Training (Han et al., EACL 2021)
PDF:
https://aclanthology.org/2021.eacl-main.239.pdf
Code:
HanXudong/Diverse_Adversaries_for_Mitigating_Bias_in_Training