{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,4,4]],"date-time":"2025-04-04T12:11:12Z","timestamp":1743768672375},"reference-count":0,"publisher":"Association for the Advancement of Artificial Intelligence (AAAI)","issue":"02","license":[{"start":{"date-parts":[[2020,4,3]],"date-time":"2020-04-03T00:00:00Z","timestamp":1585872000000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/www.aaai.org"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["AAAI"],"abstract":"<jats:p>We present a novel classifier network called STEP, to classify perceived human emotion from gaits, based on a Spatial Temporal Graph Convolutional Network (ST-GCN) architecture. Given an RGB video of an individual walking, our formulation implicitly exploits the gait features to classify the perceived emotion of the human into one of four emotions: happy, sad, angry, or neutral. We train STEP on annotated real-world gait videos, augmented with annotated synthetic gaits generated using a novel generative network called STEP-Gen, built on an ST-GCN based Conditional Variational Autoencoder (CVAE). We incorporate a novel push-pull regularization loss in the CVAE formulation of STEP-Gen to generate realistic gaits and improve the classification accuracy of STEP. We also release a novel dataset (E-Gait), which consists of 4,227 human gaits annotated with perceived emotions along with thousands of synthetic gaits. In practice, STEP can learn the affective features and exhibits classification accuracy of 88% on E-Gait, which is 14\u201330% more accurate over prior methods.<\/jats:p>","DOI":"10.1609\/aaai.v34i02.5490","type":"journal-article","created":{"date-parts":[[2020,6,29]],"date-time":"2020-06-29T17:49:04Z","timestamp":1593452944000},"page":"1342-1350","source":"Crossref","is-referenced-by-count":70,"title":["STEP: Spatial Temporal Graph Convolutional Networks for Emotion Perception from Gaits"],"prefix":"10.1609","volume":"34","author":[{"given":"Uttaran","family":"Bhattacharya","sequence":"first","affiliation":[]},{"given":"Trisha","family":"Mittal","sequence":"additional","affiliation":[]},{"given":"Rohan","family":"Chandra","sequence":"additional","affiliation":[]},{"given":"Tanmay","family":"Randhavane","sequence":"additional","affiliation":[]},{"given":"Aniket","family":"Bera","sequence":"additional","affiliation":[]},{"given":"Dinesh","family":"Manocha","sequence":"additional","affiliation":[]}],"member":"9382","published-online":{"date-parts":[[2020,4,3]]},"container-title":["Proceedings of the AAAI Conference on Artificial Intelligence"],"original-title":[],"link":[{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/5490\/5346","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/5490\/5346","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,11,3]],"date-time":"2022-11-03T23:45:58Z","timestamp":1667519158000},"score":1,"resource":{"primary":{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/5490"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,4,3]]},"references-count":0,"journal-issue":{"issue":"02","published-online":{"date-parts":[[2020,6,15]]}},"URL":"https:\/\/doi.org\/10.1609\/aaai.v34i02.5490","relation":{},"ISSN":["2374-3468","2159-5399"],"issn-type":[{"value":"2374-3468","type":"electronic"},{"value":"2159-5399","type":"print"}],"subject":[],"published":{"date-parts":[[2020,4,3]]}}}