{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,7,16]],"date-time":"2024-07-16T16:33:23Z","timestamp":1721147603740},"reference-count":0,"publisher":"Association for the Advancement of Artificial Intelligence (AAAI)","issue":"1","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["HCOMP"],"abstract":"<jats:p>Model explanations are generated by XAI (explainable AI) methods to help people understand and interpret machine learning models. To study XAI methods from the human perspective, we propose a human-based benchmark dataset, i.e., human saliency benchmark (HSB), for evaluating saliency-based XAI methods. Different from existing human saliency annotations where class-related features are manually and subjectively labeled, this benchmark collects more objective human attention on vision information with a precise eye-tracking device and a novel crowdsourcing experiment. Taking the labor cost of human experiment into consideration, we further explore the potential of utilizing a prediction model trained on HSB to mimic saliency annotating by humans. Hence, a dense prediction problem is formulated, and we propose an encoder-decoder architecture which combines multi-modal and multi-scale features to produce the human saliency maps. Accordingly, a pretraining-finetuning method is designed to address the model training problem. Finally, we arrive at a model trained on HSB named human saliency imitator (HSI). We show, through an extensive evaluation, that HSI can successfully predict human saliency on our HSB dataset, and the HSI-generated human saliency dataset on ImageNet showcases the ability of benchmarking XAI methods both qualitatively and quantitatively.<\/jats:p>","DOI":"10.1609\/hcomp.v10i1.22002","type":"journal-article","created":{"date-parts":[[2022,10,18]],"date-time":"2022-10-18T10:06:34Z","timestamp":1666087594000},"page":"231-242","source":"Crossref","is-referenced-by-count":4,"title":["HSI: Human Saliency Imitator for Benchmarking Saliency-Based Model Explanations"],"prefix":"10.1609","volume":"10","author":[{"given":"Yi","family":"Yang","sequence":"first","affiliation":[]},{"given":"Yueyuan","family":"Zheng","sequence":"additional","affiliation":[]},{"given":"Didan","family":"Deng","sequence":"additional","affiliation":[]},{"given":"Jindi","family":"Zhang","sequence":"additional","affiliation":[]},{"given":"Yongxiang","family":"Huang","sequence":"additional","affiliation":[]},{"given":"Yumeng","family":"Yang","sequence":"additional","affiliation":[]},{"given":"Janet H.","family":"Hsiao","sequence":"additional","affiliation":[]},{"given":"Caleb Chen","family":"Cao","sequence":"additional","affiliation":[]}],"member":"9382","published-online":{"date-parts":[[2022,10,14]]},"container-title":["Proceedings of the AAAI Conference on Human Computation and Crowdsourcing"],"original-title":[],"link":[{"URL":"https:\/\/ojs.aaai.org\/index.php\/HCOMP\/article\/download\/22002\/21778","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ojs.aaai.org\/index.php\/HCOMP\/article\/download\/22002\/21778","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,10,18]],"date-time":"2022-10-18T10:06:35Z","timestamp":1666087595000},"score":1,"resource":{"primary":{"URL":"https:\/\/ojs.aaai.org\/index.php\/HCOMP\/article\/view\/22002"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,10,14]]},"references-count":0,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2022,10,14]]}},"URL":"https:\/\/doi.org\/10.1609\/hcomp.v10i1.22002","relation":{},"ISSN":["2769-1349","2769-1330"],"issn-type":[{"value":"2769-1349","type":"electronic"},{"value":"2769-1330","type":"print"}],"subject":[],"published":{"date-parts":[[2022,10,14]]}}}