{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,9,2]],"date-time":"2024-09-02T05:49:38Z","timestamp":1725256178601},"reference-count":36,"publisher":"MDPI AG","issue":"18","license":[{"start":{"date-parts":[[2020,9,13]],"date-time":"2020-09-13T00:00:00Z","timestamp":1599955200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"This work concludes the first study on mouth-based emotion recognition adopting a transfer learning approach. Transfer learning results are paramount for mouth-based emotion recognition, because few datasets are available, and most of them include emotional expressions simulated by actors, instead of adopting real-world categorisation. Using transfer learning, we can use fewer training data than training a whole network from scratch, and thus more efficiently fine-tune the network with emotional data and improve the convolutional neural network\u2019s accuracy in the desired domain. The proposed approach aims at improving emotion recognition dynamically, taking into account not only new scenarios but also situations modified with respect to the initial training phase, because the image of the mouth can be available even when the whole face is visible only from an unfavourable perspective. Typical applications include automated supervision of bedridden critical patients in a healthcare management environment, and portable applications supporting disabled users who have difficulties in seeing or recognising facial emotions. 
This achievement takes advantage of previous preliminary works on mouth-based emotion recognition using deep learning, and has the further benefit of having been tested and compared to a set of other networks using an extensive dataset for face-based emotion recognition, well known in the literature. The accuracy of mouth-based emotion recognition was also compared to the corresponding full-face emotion recognition; we found that the loss in accuracy is mostly compensated by consistent performance in the visual emotion recognition domain. We can, therefore, state that our method proves the importance of mouth detection in the complex process of emotion recognition.","DOI":"10.3390\/s20185222","type":"journal-article","created":{"date-parts":[[2020,9,14]],"date-time":"2020-09-14T01:11:32Z","timestamp":1600045892000},"page":"5222","source":"Crossref","is-referenced-by-count":17,"title":["Enhancing Mouth-Based Emotion Recognition Using Transfer Learning"],"prefix":"10.3390","volume":"20","author":[{"ORCID":"http:\/\/orcid.org\/0000-0002-2972-7188","authenticated-orcid":false,"given":"Valentina","family":"Franzoni","sequence":"first","affiliation":[{"name":"Department of Mathematics and Computer Science, University of Perugia, 06123 Perugia, Italy"}]},{"ORCID":"http:\/\/orcid.org\/0000-0002-1854-2196","authenticated-orcid":false,"given":"Giulio","family":"Biondi","sequence":"additional","affiliation":[{"name":"Department of Mathematics and Computer Science, University of Florence, 50121 Firenze, Italy"}]},{"ORCID":"http:\/\/orcid.org\/0000-0001-6815-6659","authenticated-orcid":false,"given":"Damiano","family":"Perri","sequence":"additional","affiliation":[{"name":"Department of Mathematics and Computer Science, University of Florence, 50121 Firenze, Italy"}]},{"ORCID":"http:\/\/orcid.org\/0000-0003-4327-520X","authenticated-orcid":false,"given":"Osvaldo","family":"Gervasi","sequence":"additional","affiliation":[{"name":"Department of Mathematics and Computer 
Science, University of Perugia, 06123 Perugia, Italy"}]}],"member":"1968","published-online":{"date-parts":[[2020,9,13]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"17","DOI":"10.3233\/WEB-190397","article-title":"Automating facial emotion recognition","volume":"17","author":"Gervasi","year":"2019","journal-title":"Web Intell."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Sagonas, C., Tzimiropoulos, G., Zafeiriou, S., and Pantic, M. (2013, January 23\u201328). A Semi-automatic Methodology for Facial Landmark Annotation. Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Portland, OR, USA.","DOI":"10.1109\/CVPRW.2013.132"},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Kazemi, V., and Sullivan, J. (2014, January 23\u201328). One millisecond face alignment with an ensemble of regression trees. Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA.","DOI":"10.1109\/CVPR.2014.241"},{"key":"ref_4","first-page":"692","article-title":"EmEx, a Tool for Automated Emotive Face Recognition Using Convolutional Neural Networks","volume":"Volume 10406","author":"Riganelli","year":"2017","journal-title":"Lecture Notes in Computer Science, Proceedings of the International Conference on Computational Science and Its Applications, Trieste, Italy, 3\u20136 July 2017"},{"key":"ref_5","first-page":"649","article-title":"An Approach for Improving Automatic Mouth Emotion Recognition","volume":"Volume 11619","author":"Misra","year":"2019","journal-title":"Lecture Notes in Computer Science, Proceedings of the Computational Science and Its Applications\u2014ICCSA 2019, Saint Petersburg, Russia, 1\u20134 July 2019"},{"key":"ref_6","first-page":"450","article-title":"A Method for Predicting Words by Interpreting Labial Movements","volume":"Volume 9787","author":"Gervasi","year":"2016","journal-title":"Lecture Notes in Computer Science, Proceedings 
of the Computational Science and Its Applications\u2014ICCSA 2016, Beijing, China, 4\u20137 July 2016"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Deng, J., Dong, W., Socher, R., Li, L., Li, K., and Li, F.-F. (2009, January 20\u201325). ImageNet: A large-scale hierarchical image database. Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA.","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"3","DOI":"10.1016\/j.imavis.2016.01.002","article-title":"300 Faces In-The-Wild Challenge: Database and results","volume":"47","author":"Sagonas","year":"2016","journal-title":"Image Vis. Comput."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Ekman, P. (1992). An Argument for Basic Emotions. Cogn. Emot.","DOI":"10.1037\/\/0033-295X.99.3.550"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"529","DOI":"10.1177\/053901882021004003","article-title":"A psychoevolutionary theory of emotions","volume":"21","author":"Plutchik","year":"1982","journal-title":"Soc. Sci. Inf."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Franzoni, V., Milani, A., and Vallverd\u00fa, J. (2017, January 23\u201326). Emotional Affordances in Human-Machine Interactive Planning and Negotiation. Proceedings of the International Conference on Web Intelligence, Leipzig, Germany.","DOI":"10.1145\/3106426.3109421"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Franzoni, V., Milani, A., and Vallverd\u00fa, J. (2019, January 14\u201317). Errors, Biases, and Overconfidence in Artificial Emotional Modeling. 
Proceedings of the International Conference on Web Intelligence, Thessaloniki, Greece.","DOI":"10.1145\/3358695.3361749"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"1","DOI":"10.3233\/WEB-190395","article-title":"Emotional machines: The next revolution","volume":"17","author":"Franzoni","year":"2019","journal-title":"Web Intell."},{"key":"ref_14","first-page":"709","article-title":"A Brain Computer Interface for Enhancing the Communication of People with Severe Impairment","volume":"Volume 8584","author":"Murgante","year":"2014","journal-title":"Lecture Notes in Computer Science, Proceedings of the Computational Science and Its Applications\u2014ICCSA 2014, Guimar\u00e3es, Portugal, 30 June\u20133 July 2014"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"43","DOI":"10.1007\/s10489-015-0695-5","article-title":"Speaky for robots: The development of vocal interfaces for robotic applications","volume":"44","author":"Bastianelli","year":"2016","journal-title":"Appl. Intell."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"55","DOI":"10.1016\/S1071-5819(03)00052-1","article-title":"Affective Computing: Challenges","volume":"59","author":"Picard","year":"2003","journal-title":"Int. J. Hum. Comput. Stud."},{"key":"ref_17","unstructured":"Cieliebak, M., D\u00fcrr, O., and Uzdilli, F. (2013, January 17\u201318). Potential and limitations of commercial sentiment detection tools. 
Proceedings of the CEUR Workshop Proceedings, Valencia, Spain."},{"key":"ref_18","first-page":"391","article-title":"Emotion Recognition for Self-aid in Addiction Treatment, Psychotherapy, and Nonviolent Communication","volume":"Volume 11620","author":"Misra","year":"2019","journal-title":"Lecture Notes in Computer Science, Proceedings of the Computational Science and Its Applications\u2014ICCSA 2019, Saint Petersburg, Russia, 1\u20134 July 2019"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"663","DOI":"10.1007\/s00779-010-0294-8","article-title":"Interactive visual supports for children with autism","volume":"14","author":"Hayes","year":"2010","journal-title":"Pers. Ubiquitous Comput."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"1175","DOI":"10.1109\/34.954607","article-title":"Toward machine emotional intelligence: Analysis of affective physiological state","volume":"23","author":"Picard","year":"2001","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_21","unstructured":"Bertola, F., and Patti, V. (2013, January 17\u201318). Emotional responses to artworks in online collections. Proceedings of the CEUR Workshop Proceedings, Valencia, Spain."},{"key":"ref_22","unstructured":"Canossa, A., Badler, J.B., El-Nasr, M.S., and Anderson, E. (2016, January 16). Eliciting Emotions in Design of Games\u2014A Theory Driven Approach. Proceedings of the 4th Workshop on Emotions and Personality in Personalized Systems (EMPIRE), Boston, MA, USA."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"6","DOI":"10.1109\/MSMC.2017.2664478","article-title":"Cybernetics of the Mind: Learning Individual\u2019s Perceptions Autonomously","volume":"3","author":"Angelov","year":"2017","journal-title":"IEEE Syst. Man Cybern. Mag."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Biondi, G., Franzoni, V., Li, Y., and Milani, A. (2016, January 6\u20139). Web-Based Similarity for Emotion Recognition in Web Objects. 
Proceedings of the 9th International Conference on Utility and Cloud Computing, Shanghai, China.","DOI":"10.1145\/2996890.3007883"},{"key":"ref_25","unstructured":"Chollet, F. (2020, July 14). Keras. Available online: https:\/\/github.com\/fchollet\/keras."},{"key":"ref_26","unstructured":"Antoniou, A., Storkey, A., and Edwards, H. (2017). Data Augmentation Generative Adversarial Networks. arXiv."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Sagonas, C., Tzimiropoulos, G., Zafeiriou, S., and Pantic, M. (2013, January 2\u20138). 300 Faces in-the-Wild Challenge: The First Facial Landmark Localization Challenge. Proceedings of the 2013 IEEE International Conference on Computer Vision Workshops, Sydney, Australia.","DOI":"10.1109\/ICCVW.2013.59"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"84","DOI":"10.1145\/3065386","article-title":"ImageNet Classification with Deep Convolutional Neural Networks","volume":"60","author":"Krizhevsky","year":"2017","journal-title":"Commun. ACM"},{"key":"ref_29","first-page":"665","article-title":"Towards a Learning-Based Performance Modeling for Accelerating Deep Neural Networks","volume":"Volume 11619","author":"Misra","year":"2019","journal-title":"Lecture Notes in Computer Science Book, Proceedings of the Computational Science and Its Applications\u2014ICCSA 2019, Saint Petersburg, Russia, 1\u20134 July 2019"},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"1915","DOI":"10.1109\/TPAMI.2012.231","article-title":"Learning Hierarchical Features for Scene Labeling","volume":"35","author":"Farabet","year":"2013","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_31","unstructured":"Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep Learning, MIT Press."},{"key":"ref_32","unstructured":"Simonyan, K., and Zisserman, A. (2014). Very Deep Convolutional Networks for Large-Scale Image Recognition. 
arXiv."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Ioffe, S., Vanhoucke, V., and Alemi, A. (2016). Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. arXiv.","DOI":"10.1609\/aaai.v31i1.11231"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. (2015). Rethinking the Inception Architecture for Computer Vision. arXiv.","DOI":"10.1109\/CVPR.2016.308"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Chollet, F. (2017, January 21\u201326). Xception: Deep Learning with Depthwise Separable Convolutions. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.195"},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"18","DOI":"10.1109\/TAFFC.2017.2740923","article-title":"AffectNet: A Database for Facial Expression, Valence, and Arousal Computing in the Wild","volume":"10","author":"Mollahosseini","year":"2019","journal-title":"IEEE Trans. Affect. 
Comput."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/18\/5222\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,7,3]],"date-time":"2024-07-03T05:09:20Z","timestamp":1719983360000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/18\/5222"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,9,13]]},"references-count":36,"journal-issue":{"issue":"18","published-online":{"date-parts":[[2020,9]]}},"alternative-id":["s20185222"],"URL":"https:\/\/doi.org\/10.3390\/s20185222","relation":{"has-preprint":[{"id-type":"doi","id":"10.20944\/preprints202007.0379.v1","asserted-by":"object"}]},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,9,13]]}}}