{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,8,5]],"date-time":"2024-08-05T11:17:29Z","timestamp":1722856649197},"reference-count":32,"publisher":"Wiley","issue":"3-4","license":[{"start":{"date-parts":[[2023,5,9]],"date-time":"2023-05-09T00:00:00Z","timestamp":1683590400000},"content-version":"vor","delay-in-days":8,"URL":"http:\/\/onlinelibrary.wiley.com\/termsAndConditions#vor"}],"funder":[{"DOI":"10.13039\/501100004377","name":"Hong Kong Polytechnic University","doi-asserted-by":"publisher","award":["P0030419","P0042740","P0043906","P0044520","P0035358"],"id":[{"id":"10.13039\/501100004377","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["onlinelibrary.wiley.com"],"crossmark-restriction":true},"short-container-title":["Computer Animation & Virtual"],"published-print":{"date-parts":[[2023,5]]},"abstract":"Abstract<\/jats:title>Face recognition (FR) systems based on convolutional neural networks have shown excellent performance in human face inference. However, some malicious users may exploit such powerful systems to identify others' face images disclosed by victims' social network accounts, consequently obtaining private information. To address this emerging issue, synthesizing face protection images with visual and protective effects is essential. However, existing face protection methods encounter three critical problems: poor visual effect, limited protective effect, and trade\u2010off between visual and protective effects. To address these challenges, we propose a novel face protection approach in this article. Specifically, we design a generative adversarial network (GAN) framework with an autoencoder (AEGAN) as the generator to synthesize the protection images. It is worth noting that we introduce an interpolation upsampling module in the decoder in order to let the synthesized protection images evade recognition by powerful convolution\u2010based FR systems. 
Furthermore, we introduce an attention module with a perceptual loss in AEGAN to enhance the visual effects of the synthesized images. Extensive experiments show that AEGAN not only maintains comfortable visual quality of the synthesized images but also prevents recognition by commercial FR systems, including Baidu and iFLYTEK.<\/jats:p>","DOI":"10.1002\/cav.2160","type":"journal-article","created":{"date-parts":[[2023,5,10]],"date-time":"2023-05-10T22:54:40Z","timestamp":1683759280000},"update-policy":"http:\/\/dx.doi.org\/10.1002\/crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["AEGAN: Generating imperceptible face synthesis via autoencoder\u2010based generative adversarial network"],"prefix":"10.1002","volume":"34","author":[{"given":"Aolin","family":"Che","sequence":"first","affiliation":[{"name":"Faculty of Innovation Engineering Macau University of Science and Technology Macau"}]},{"given":"Jing\u2010Hua","family":"Yang","sequence":"additional","affiliation":[{"name":"Faculty of Innovation Engineering Macau University of Science and Technology Macau"}]},{"ORCID":"http:\/\/orcid.org\/0000-0001-7524-2272","authenticated-orcid":false,"given":"Cai","family":"Guo","sequence":"additional","affiliation":[{"name":"School of Computing and Information Engineering Hanshan Normal University Chaozhou China"}]},{"given":"Hong\u2010Ning","family":"Dai","sequence":"additional","affiliation":[{"name":"Department of Computer Science Hong Kong Baptist University Hong Kong"}]},{"given":"Haoran","family":"Xie","sequence":"additional","affiliation":[{"name":"Department of Computing and Decision Sciences Lingnan University Hong Kong"}]},{"ORCID":"http:\/\/orcid.org\/0000-0002-1503-0240","authenticated-orcid":false,"given":"Ping","family":"Li","sequence":"additional","affiliation":[{"name":"Department of Computing The Hong Kong Polytechnic University Hong Kong"},{"name":"School of Design The Hong Kong Polytechnic University Hong
Kong"}]}],"member":"311","published-online":{"date-parts":[[2023,5,9]]},"reference":[{"key":"e_1_2_7_2_1","doi-asserted-by":"crossref","unstructured":"YouQ BhatiaS SunT LuoJ.The eyes of the beholder: gender prediction using images posted in online social networks. Proceedings of the IEEE International Conference on Data Mining Workshop; 2014. p. 1026\u201330.","DOI":"10.1109\/ICDMW.2014.93"},{"key":"e_1_2_7_3_1","doi-asserted-by":"crossref","unstructured":"AlipourB ImineA RusinowitchM.Gender inference for Facebook picture owners. Proceedings of the International Conference on Trust and Privacy in Digital Business; 2019. p. 145\u201360.","DOI":"10.1007\/978-3-030-27813-7_10"},{"key":"e_1_2_7_4_1","unstructured":"JungSG AnJ KwakH SalminenJ JansenBJ.Inferring social media users' demographics from profile pictures: a face++ analysis on twitter users. Proceedings of the International Conference on Electronic Business; 2017. p. 140\u20135."},{"key":"e_1_2_7_5_1","doi-asserted-by":"crossref","unstructured":"WangR CampbellAT ZhouX.Using opportunistic face logging from smartphone to infer mental health: challenges and future directions. Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the ACM International Symposium on Wearable Computers; 2015. p. 683\u201392.","DOI":"10.1145\/2800835.2804391"},{"key":"e_1_2_7_6_1","doi-asserted-by":"crossref","unstructured":"JungSG AnJ KwakH SalminenJ JansenBJ.Assessing the accuracy of four popular face recognition tools for inferring gender age and race. Proceedings of the International AAAI Conference on Web and Social Media; 2018. p. 624\u20137.","DOI":"10.1609\/icwsm.v12i1.15058"},{"key":"e_1_2_7_7_1","unstructured":"WuX ZhangX.Responses to critiques on machine learning of criminality perceptions. 
arXiv preprint arXiv:161104135 2016."},{"key":"e_1_2_7_8_1","doi-asserted-by":"publisher","DOI":"10.1037\/pspa0000098"},{"key":"e_1_2_7_9_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.image.2019.115699"},{"key":"e_1_2_7_10_1","doi-asserted-by":"publisher","DOI":"10.3390\/jcp1030024"},{"key":"e_1_2_7_11_1","doi-asserted-by":"crossref","unstructured":"HuS LiuX ZhangY LiM ZhangLY JinH et al.Protecting facial privacy: generating adversarial identity masks via style\u2010robust makeup transfer. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition; 2022. p. 15014\u201323.","DOI":"10.1109\/CVPR52688.2022.01459"},{"key":"e_1_2_7_12_1","doi-asserted-by":"crossref","unstructured":"YangX DongY PangT SuH ZhuJ ChenY et al.Towards face encryption by generating adversarial identity masks. Proceedings of the IEEE\/CVF International Conference on Computer Vision; 2021. p. 3897\u2013907.","DOI":"10.1109\/ICCV48922.2021.00387"},{"key":"e_1_2_7_13_1","doi-asserted-by":"crossref","unstructured":"NetoPC SequeiraAF CardosoJS.Myope models\u2010are face presentation attack detection models short\u2010sighted? Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision; 2022. p. 390\u20139.","DOI":"10.1109\/WACVW54805.2022.00045"},{"key":"e_1_2_7_14_1","doi-asserted-by":"crossref","unstructured":"GafniO WolfL TaigmanY.Live face de\u2010identification in video. Proceeding of the IEEE\/CVF International Conference on Computer Vision; 2019. p. 9378\u201387.","DOI":"10.1109\/ICCV.2019.00947"},{"key":"e_1_2_7_15_1","doi-asserted-by":"crossref","unstructured":"DebD ZhangJ JainAK.AdvFaces: adversarial face synthesis. Proceedings of the International Joint Conference on Biometrics; 2020. p. 
1\u201310.","DOI":"10.1109\/IJCB48548.2020.9304898"},{"key":"e_1_2_7_16_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2020.3036801"},{"key":"e_1_2_7_17_1","doi-asserted-by":"crossref","unstructured":"MirjaliliV RaschkaS NamboodiriA RossA.Semi\u2010adversarial networks: convolutional autoencoders for imparting privacy to face images. Proceedings of the International Conference on Biometrics; 2018. p. 82\u20139.","DOI":"10.1109\/ICB2018.2018.00023"},{"key":"e_1_2_7_18_1","first-page":"1","article-title":"Enhanced embedded autoencoders: an attribute\u2010preserving face de\u2010identification framework","author":"Liu J","year":"2023","journal-title":"IEEE Internet Things J"},{"key":"e_1_2_7_19_1","unstructured":"GuoMH LuCZ LiuZN ChengMM HuSM.Visual attention network. arXiv preprint arXiv:220209741 2022."},{"key":"e_1_2_7_20_1","doi-asserted-by":"crossref","unstructured":"ZhuX ChengD ZhangZ LinS DaiJ.An empirical study of spatial attention mechanisms in deep networks. Proceeding of the IEEE\/CVF International Conference on Computer Vision; 2019. p. 6688\u201397.","DOI":"10.1109\/ICCV.2019.00679"},{"key":"e_1_2_7_21_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neunet.2020.09.001"},{"key":"e_1_2_7_22_1","unstructured":"VaswaniA ShazeerN ParmarN UszkoreitJ JonesL GomezAN et al.Attention is all you need. Proceedings of the 31st Conference on Advances in Neural Information Processing Systems; 2017. p. 1\u201311."},{"key":"e_1_2_7_23_1","doi-asserted-by":"crossref","unstructured":"GatysLA EckerAS BethgeM.Image style transfer using convolutional neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2016. p. 2414\u201323.","DOI":"10.1109\/CVPR.2016.265"},{"key":"e_1_2_7_24_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2019.2921336"},{"key":"e_1_2_7_25_1","doi-asserted-by":"crossref","unstructured":"JohnsonJ AlahiA Fei\u2010FeiL.Perceptual losses for real\u2010time style transfer and super\u2010resolution. 
Proceedings of the European Conference on Computer Vision; 2016. p. 694\u2013711.","DOI":"10.1007\/978-3-319-46475-6_43"},{"key":"e_1_2_7_26_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2019.2895768"},{"key":"e_1_2_7_27_1","doi-asserted-by":"crossref","unstructured":"IsolaP ZhuJY ZhouT EfrosAA.Image\u2010to\u2010image translation with conditional adversarial networks. Proceeding of the IEEE Conference on Computer Vision and Pattern Recognition; 2017. p. 1125\u201334.","DOI":"10.1109\/CVPR.2017.632"},{"key":"e_1_2_7_28_1","unstructured":"SimonyanK ZissermanA.Very deep convolutional networks for large\u2010scale image recognition. arXiv preprint arXiv:14091556 2014."},{"key":"e_1_2_7_29_1","unstructured":"HuangGB MattarM BergT Learned\u2010MillerE.Labeled faces in the wild: a database for studying face recognition in unconstrained environments. Proceedings of the Workshop on faces in \u2018Real\u2010Life\u2019 images: detection alignment and recognition; 2008. p. 1\u201314."},{"key":"e_1_2_7_30_1","doi-asserted-by":"publisher","DOI":"10.5244\/C.29.41"},{"key":"e_1_2_7_31_1","doi-asserted-by":"crossref","unstructured":"SchroffF KalenichenkoD PhilbinJ.FaceNet: a unified embedding for face recognition and clustering. Proceeding of the IEEE Conference on Computer Vision and Pattern Recognition; 2015. p. 815\u201323.","DOI":"10.1109\/CVPR.2015.7298682"},{"key":"e_1_2_7_32_1","doi-asserted-by":"crossref","unstructured":"DengJ GuoJ XueN ZafeiriouS.ArcFace: additive angular margin loss for deep face recognition. Proceeding of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition; 2019. p. 4690\u20139.","DOI":"10.1109\/CVPR.2019.00482"},{"key":"e_1_2_7_33_1","doi-asserted-by":"crossref","unstructured":"KomkovS PetiushkoA.AdvHat: real\u2010world adversarial attack on arcface face ID system. Proceedings of the International Conference on Pattern Recognition; 2021. p. 
819\u201326.","DOI":"10.1109\/ICPR48806.2021.9412236"}],"container-title":["Computer Animation and Virtual Worlds"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1002\/cav.2160","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,8,20]],"date-time":"2023-08-20T09:33:42Z","timestamp":1692524022000},"score":1,"resource":{"primary":{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/10.1002\/cav.2160"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,5]]},"references-count":32,"journal-issue":{"issue":"3-4","published-print":{"date-parts":[[2023,5]]}},"alternative-id":["10.1002\/cav.2160"],"URL":"https:\/\/doi.org\/10.1002\/cav.2160","archive":["Portico"],"relation":{},"ISSN":["1546-4261","1546-427X"],"issn-type":[{"value":"1546-4261","type":"print"},{"value":"1546-427X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,5]]},"assertion":[{"value":"2023-04-21","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-05-01","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-05-09","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}