SpFormer: Spatio-Temporal Modeling for Scanpaths with Transformer
DOI:
https://doi.org/10.1609/aaai.v38i7.28593
Keywords:
CV: Applications, CV: Representation Learning for Vision
Abstract
Saccadic scanpath, a data representation of human visual behavior, has received broad interest across multiple domains. A scanpath is a complex eye-tracking data modality that comprises sequences of fixation positions and fixation durations, coupled with image information. However, previous methods usually suffer from spatial misalignment of fixation features and the loss of critical temporal information (including temporal correlation and fixation duration). In this study, we propose a Transformer-based scanpath model, SpFormer, to alleviate these problems. First, we propose a fixation-centric paradigm to extract aligned spatial fixation features and tokenize the scanpath. Then, following the visual working memory mechanism, we design a local meta attention that reduces the semantic redundancy of fixations and guides the model to focus on the meta scanpath. Finally, we progressively integrate the duration information and fuse it with the fixation features, alleviating the location ambiguity that arises as the Transformer blocks deepen. We conduct extensive experiments on four databases under three tasks. SpFormer establishes new state-of-the-art results in distinct settings, verifying its flexibility and versatility in practical applications. The code can be obtained from https://github.com/wenqizhong/SpFormer.
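To make the described pipeline concrete, the sketch below illustrates one way scanpath tokenization in the fixation-centric spirit of the abstract could look: a local feature window is cropped around each fixation position (so the spatial features stay aligned with the fixation), projected to a token, and fused with an embedded fixation duration before being passed to a Transformer. All class names, dimensions, and the fuse-by-addition choice are assumptions for illustration only, not the SpFormer implementation; see the linked repository for the actual code.

import torch
import torch.nn as nn

class ScanpathTokenizer(nn.Module):
    """Illustrative sketch (hypothetical names and sizes, not the SpFormer code):
    turn a scanpath (fixation positions + durations) over an image feature map
    into a token sequence suitable for a Transformer encoder."""

    def __init__(self, feat_dim=256, embed_dim=128, patch=7):
        super().__init__()
        self.patch = patch
        # project the local feature window around each fixation to a token
        self.fix_proj = nn.Linear(feat_dim * patch * patch, embed_dim)
        # embed the scalar fixation duration and fuse it with the fixation token
        self.dur_proj = nn.Linear(1, embed_dim)

    def forward(self, feat_map, fixations, durations):
        # feat_map:  (C, H, W) image features, e.g. from a CNN backbone
        # fixations: (T, 2) integer (y, x) fixation centers on the feature map
        # durations: (T,) fixation durations
        r = self.patch // 2
        padded = nn.functional.pad(feat_map, (r, r, r, r))
        tokens = []
        for (y, x) in fixations.tolist():
            # crop a window centered on the fixation, keeping features aligned with it
            win = padded[:, y:y + self.patch, x:x + self.patch]
            tokens.append(win.reshape(-1))
        tokens = self.fix_proj(torch.stack(tokens))          # (T, embed_dim)
        tokens = tokens + self.dur_proj(durations[:, None])  # fuse duration information
        return tokens

if __name__ == "__main__":
    feat = torch.randn(256, 32, 32)
    fixs = torch.tensor([[5, 7], [12, 20], [28, 3]])
    durs = torch.tensor([0.21, 0.35, 0.18])
    print(ScanpathTokenizer()(feat, fixs, durs).shape)  # torch.Size([3, 128])

In this sketch the duration is added once at tokenization time; the paper instead integrates duration progressively across Transformer blocks, which the repository implements in full.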
Published
2024-03-24
How to Cite
Zhong, W., Yu, L., Xia, C., Han, J., & Zhang, D. (2024). SpFormer: Spatio-Temporal Modeling for Scanpaths with Transformer. Proceedings of the AAAI Conference on Artificial Intelligence, 38(7), 7605-7613. https://doi.org/10.1609/aaai.v38i7.28593
Issue
Section
AAAI Technical Track on Computer Vision VI