Chao Fan 0001
Person information
- affiliation: Southern University of Science and Technology, Research Institute of Trustworthy Autonomous System, China
2020 – today
- 2024
- [j4] Shinan Zou, Jianbo Xiong, Chao Fan, Chuanfu Shen, Shiqi Yu, Jin Tang: A Multi-Stage Adaptive Feature Fusion Neural Network for Multimodal Gait Recognition. IEEE Trans. Biom. Behav. Identity Sci. 6(4): 539-549 (2024)
- [c11] Chao Fan, Jingzhe Ma, Dongyang Jin, Chuanfu Shen, Shiqi Yu: SkeletonGait: Gait Recognition Using Skeleton Maps. AAAI 2024: 1662-1669
- [c10] Shinan Zou, Chao Fan, Jianbo Xiong, Chuanfu Shen, Shiqi Yu, Jin Tang: Cross-Covariate Gait Recognition: A Benchmark. AAAI 2024: 7855-7863
- [c9] Dingqiang Ye, Chao Fan, Jingzhe Ma, Xiaoming Liu, Shiqi Yu: BigGait: Learning Gait Representation You Want by Large Vision Models. CVPR 2024: 200-210
- [c8] Zirui Zhou, Junhao Liang, Zizhao Peng, Chao Fan, Fengwei An, Shiqi Yu: Gait Patterns as Biomarkers: A Video-Based Approach for Classifying Scoliosis. MICCAI (5) 2024: 284-294
- [i13] Dingqiang Ye, Chao Fan, Jingzhe Ma, Xiaoming Liu, Shiqi Yu: BigGait: Learning Gait Representation You Want by Large Vision Models. CoRR abs/2402.19122 (2024)
- [i12] Chao Fan, Saihui Hou, Junhao Liang, Chuanfu Shen, Jingzhe Ma, Dongyang Jin, Yongzhen Huang, Shiqi Yu: OpenGait: A Comprehensive Benchmark Study for Gait Recognition towards Better Practicality. CoRR abs/2405.09138 (2024)
- [i11] Zirui Zhou, Junhao Liang, Zizhao Peng, Chao Fan, Fengwei An, Shiqi Yu: Gait Patterns as Biomarkers: A Video-Based Approach for Classifying Scoliosis. CoRR abs/2407.05726 (2024)
- [i10] Chao Fan, Hongyuan Yu, Luo Wang, Yan Huang, Liang Wang, Xibin Jia: SliceMamba for Medical Image Segmentation. CoRR abs/2407.08481 (2024)
- 2023
- [j3] Chao Fan, Saihui Hou, Jilong Wang, Yongzhen Huang, Shiqi Yu: Learning Gait Representation From Massive Unlabelled Walking Videos: A Benchmark. IEEE Trans. Pattern Anal. Mach. Intell. 45(12): 14920-14937 (2023)
- [j2] Saihui Hou, Chao Fan, Chunshui Cao, Xu Liu, Yongzhen Huang: A Comprehensive Study on the Evaluation of Silhouette-Based Gait Recognition. IEEE Trans. Biom. Behav. Identity Sci. 5(2): 196-208 (2023)
- [j1] Chao Fan, Hongyuan Yu, Yan Huang, Caifeng Shan, Liang Wang, Chenglong Li: SiamON: Siamese Occlusion-Aware Network for Visual Tracking. IEEE Trans. Circuits Syst. Video Technol. 33(1): 186-199 (2023)
- [c7] Chuanfu Shen, Fan Chao, Wei Wu, Rui Wang, George Q. Huang, Shiqi Yu: LidarGait: Benchmarking 3D Gait Recognition with Point Clouds. CVPR 2023: 1054-1063
- [c6] Chao Fan, Junhao Liang, Chuanfu Shen, Saihui Hou, Yongzhen Huang, Shiqi Yu: OpenGait: Revisiting Gait Recognition Toward Better Practicality. CVPR 2023: 9707-9716
- [c5] Rui Wang, Chuanfu Shen, Chao Fan, George Q. Huang, Shiqi Yu: PointGait: Boosting End-to-End 3D Gait Recognition with Point Clouds via Spatiotemporal Modeling. IJCB 2023: 1-10
- [c4] Shinan Zou, Jianbo Xiong, Chao Fan, Shiqi Yu, Jin Tang: A Multi-Stage Adaptive Feature Fusion Neural Network for Multimodal Gait Recognition. IJCB 2023: 1-10
- [i9] Chao Fan, Saihui Hou, Yongzhen Huang, Shiqi Yu: Exploring Deep Models for Practical Gait Recognition. CoRR abs/2303.03301 (2023)
- [i8] Dingqiang Ye, Jingzhe Ma, Chao Fan, Shiqi Yu: GaitEditer: Attribute Editing for Gait Representation Learning. CoRR abs/2303.05076 (2023)
- [i7] Chao Fan, Jingzhe Ma, Dongyang Jin, Chuanfu Shen, Shiqi Yu: SkeletonGait: Gait Recognition Using Skeleton Maps. CoRR abs/2311.13444 (2023)
- [i6] Shinan Zou, Chao Fan, Jianbo Xiong, Chuanfu Shen, Shiqi Yu, Jin Tang: Cross-Covariate Gait Recognition: A Benchmark. CoRR abs/2312.14404 (2023)
- [i5] Shinan Zou, Jianbo Xiong, Chao Fan, Shiqi Yu, Jin Tang: A Multi-Stage Adaptive Feature Fusion Neural Network for Multimodal Gait Recognition. CoRR abs/2312.14410 (2023)
- 2022
- [c3] Junhao Liang, Chao Fan, Saihui Hou, Chuanfu Shen, Yongzhen Huang, Shiqi Yu: GaitEdge: Beyond Plain End-to-End Gait Recognition for Better Practicality. ECCV (5) 2022: 375-390
- [i4] Junhao Liang, Chao Fan, Saihui Hou, Chuanfu Shen, Yongzhen Huang, Shiqi Yu: GaitEdge: Beyond Plain End-to-end Gait Recognition for Better Practicality. CoRR abs/2203.03972 (2022)
- [i3] Chao Fan, Saihui Hou, Jilong Wang, Yongzhen Huang, Shiqi Yu: Learning Gait Representation from Massive Unlabelled Walking Videos: A Benchmark. CoRR abs/2206.13964 (2022)
- [i2] Chao Fan, Junhao Liang, Chuanfu Shen, Saihui Hou, Yongzhen Huang, Shiqi Yu: OpenGait: Revisiting Gait Recognition Toward Better Practicality. CoRR abs/2211.06597 (2022)
- [i1] Chuanfu Shen, Fan Chao, Wei Wu, Rui Wang, George Q. Huang, Shiqi Yu: LidarGait: Benchmarking 3D Gait Recognition with Point Clouds. CoRR abs/2211.10598 (2022)
- 2020
- [c2] Chao Fan, Yunjie Peng, Chunshui Cao, Xu Liu, Saihui Hou, Jiannan Chi, Yongzhen Huang, Qing Li, Zhiqiang He: GaitPart: Temporal Part-Based Model for Gait Recognition. CVPR 2020: 14213-14221
2010 – 2019
- 2019
- [c1] Chao Fan, Yulong Wang, Chenglong Li, Jin Tang: Visual Tracking Via Siamese Network With Global Similarity. ICIP 2019: 3985-3989