AVEC@MM 2018: Seoul, Republic of Korea
- Fabien Ringeval, Björn W. Schuller, Michel F. Valstar, Roddy Cowie, Maja Pantic:
Proceedings of the 2018 on Audio/Visual Emotion Challenge and Workshop, AVEC@MM 2018, Seoul, Republic of Korea, October 22, 2018. ACM 2018, ISBN 978-1-4503-5983-2
Keynote
- Chi-Chun Lee:
Interpersonal Behavior Modeling for Personality, Affect, and Mental States Recognition and Analysis. 1-2
Introduction
- Fabien Ringeval, Björn W. Schuller, Michel F. Valstar, Roddy Cowie, Heysem Kaya, Maximilian Schmitt, Shahin Amiriparian, Nicholas Cummins, Denis Lalanne, Adrien Michaud, Elvan Çiftçi, Hüseyin Güleç, Albert Ali Salah, Maja Pantic:
AVEC 2018 Workshop and Challenge: Bipolar Disorder and Cross-Cultural Affect Recognition. 3-13
Bipolar Disorder Sub-challenge
- Le Yang, Yan Li, Haifeng Chen, Dongmei Jiang, Meshia Cédric Oveneke, Hichem Sahli:
Bipolar Disorder Recognition with Histogram Features of Arousal and Body Gestures. 15-21
- Zhengyin Du, Weixin Li, Di Huang, Yunhong Wang:
Bipolar Disorder Recognition via Multi-scale Discriminative Audio Temporal Representation. 23-30
- Xiaofen Xing, Bolun Cai, Yinhu Zhao, Shuzhen Li, Zhiwei He, Weiquan Fan:
Multi-modality Hierarchical Recall based on GBDTs for Bipolar Disorder Classification. 31-37
- Zafi Sherhan Syed, Kirill A. Sidorov, A. David Marshall:
Automated Screening for Bipolar Disorder from Audio/Visual Modalities. 39-45
Cross-cultural Emotion Sub-challenge
- Kalani Wataraka Gamage, Ting Dang, Vidhyasaharan Sethu, Julien Epps, Eliathamby Ambikairajah:
Speech-based Continuous Emotion Prediction by Learning Perception Responses related to Salient Events: A Study based on Vocal Affect Bursts and Cross-Cultural Affect in AVEC 2018. 47-55
- Jian Huang, Ya Li, Jianhua Tao, Zheng Lian, Mingyue Niu, Minghao Yang:
Multimodal Continuous Emotion Recognition with Data Augmentation Using Recurrent Neural Networks. 57-64
- Jinming Zhao, Ruichen Li, Shizhe Chen, Qin Jin:
Multi-modal Multi-cultural Dimensional Continues Emotion Recognition in Dyadic Interactions. 65-72
Gold-standard Emotion Sub-challenge
- Chen Wang, Phil Lopes, Thierry Pun, Guillaume Chanel:
Towards a Better Gold Standard: Denoising and Modelling Continuous Emotion Annotations Based on Feature Agglomeration and Outlier Regularisation. 73-81
- Brandon M. Booth, Karel Mundnich, Shrikanth S. Narayanan:
Fusing Annotations with Majority Vote Triplet Embeddings. 83-89
Deep Learning for Affective Computing
- Jian Huang, Ya Li, Jianhua Tao, Zheng Lian, Mingyue Niu, Minghao Yang:
Deep Learning for Continuous Multiple Time Series Annotations. 91-98
- Chih-Chuan Lu, Jeng-Lin Li, Chi-Chun Lee:
Learning an Arousal-Valence Speech Front-End Network using Media Data In-the-Wild for Emotion Recognition. 99-105