Multimodal Information Processing: Some recent NLP applications
Multimodal information processing deals with the efficient use of information available in different modalities, such as audio, video, and text, to solve various real-life applications. This talk will discuss how multimodal information extracted from different modalities can help improve tasks in dialogue systems, summarization, hate speech detection, and complaint mining. Multimodal information collected from audio tones, facial expressions, and text is used to determine the type of utterance in a multitask setting where emotion recognition and dialogue act classification are solved simultaneously. Multimodal information collected from videos, images, and text can also be used to generate summaries. Images and text collected from Amazon reviews are used to develop aspect-based multimodal complaint detection systems in a multitask setting where sentiment and emotion information serve as auxiliary tasks. Memes collected from social media are used to detect hate speech in a multitask setting where sentiment, emotion, and sarcasm detection serve as auxiliary tasks. The talk will highlight these applications of multimodal information processing across different NLP tasks.
Date and Time
- Date: 03 Oct 2024
- Time: 05:00 PM to 06:00 PM
- All times are (UTC+05:30) Chennai
Hosts
Vinit Kumar Gunjan, Ph.D.
Chair IEEE Computer Society and Additional Treasurer 2024
IEEE Hyderabad Section
No: 644-645, Al-Karim Trade Center,
Ranigunj, Secunderabad – 500 003,
Telangana, India.
Ph: +91-9441328438
Registration
- Starts 28 September 2024 12:00 AM
- Ends 03 October 2024 02:00 PM
- All times are (UTC+05:30) Chennai
- No Admission Charge
Speakers
Sriparna Saha of Indian Institute of Technology, Patna
Topic:
Multimodal Information Processing: Some recent NLP applications
Biography:
https://www.iitp.ac.in/~sriparna/About.html
Email:
Address: Department of Computer Science & Engineering, Indian Institute of Technology, Patna, India