Qingkai Fang
2025
- [i16] Shaolei Zhang, Qingkai Fang, Zhe Yang, Yang Feng: LLaVA-Mini: Efficient Image and Video Large Multimodal Models with One Vision Token. CoRR abs/2501.03895 (2025)

2024
- [c13] Zhengrui Ma, Qingkai Fang, Shaolei Zhang, Shoutao Guo, Yang Feng, Min Zhang: A Non-autoregressive Generation Framework for End-to-End Simultaneous Speech-to-Any Translation. ACL (1) 2024: 1557-1575
- [c12] Qingkai Fang, Shaolei Zhang, Zhengrui Ma, Min Zhang, Yang Feng: Can We Achieve High-quality Direct Speech-to-Speech Translation without Parallel Speech Data? ACL (1) 2024: 7264-7277
- [c11] Shaolei Zhang, Qingkai Fang, Shoutao Guo, Zhengrui Ma, Min Zhang, Yang Feng: StreamSpeech: Simultaneous Speech-to-Speech Translation with Multi-task Learning. ACL (1) 2024: 8964-8986
- [c10] Qingkai Fang, Zhengrui Ma, Yan Zhou, Min Zhang, Yang Feng: CTC-based Non-autoregressive Textless Speech-to-Speech Translation. ACL (Findings) 2024: 9155-9161
- [i15] Shaolei Zhang, Qingkai Fang, Shoutao Guo, Zhengrui Ma, Min Zhang, Yang Feng: StreamSpeech: Simultaneous Speech-to-Speech Translation with Multi-task Learning. CoRR abs/2406.03049 (2024)
- [i14] Zhengrui Ma, Qingkai Fang, Shaolei Zhang, Shoutao Guo, Yang Feng, Min Zhang: A Non-autoregressive Generation Framework for End-to-End Simultaneous Speech-to-Any Translation. CoRR abs/2406.06937 (2024)
- [i13] Qingkai Fang, Shaolei Zhang, Zhengrui Ma, Min Zhang, Yang Feng: Can We Achieve High-quality Direct Speech-to-Speech Translation without Parallel Speech Data? CoRR abs/2406.07289 (2024)
- [i12] Qingkai Fang, Zhengrui Ma, Yan Zhou, Min Zhang, Yang Feng: CTC-based Non-autoregressive Textless Speech-to-Speech Translation. CoRR abs/2406.07330 (2024)
- [i11] Qingkai Fang, Shoutao Guo, Yan Zhou, Zhengrui Ma, Shaolei Zhang, Yang Feng: LLaMA-Omni: Seamless Speech Interaction with Large Language Models. CoRR abs/2409.06666 (2024)
- [i10] Shaolei Zhang, Kehao Zhang, Qingkai Fang, Shoutao Guo, Yan Zhou, Xiaodong Liu, Yang Feng: BayLing 2: A Multilingual Large Language Model with Efficient Language Alignment. CoRR abs/2411.16300 (2024)

2023
- [c9] Qingkai Fang, Yang Feng: Back Translation for Speech-to-text Translation Without Transcripts. ACL (1) 2023: 4567-4587
- [c8] Yan Zhou, Qingkai Fang, Yang Feng: CMOT: Cross-modal Mixup via Optimal Transport for Speech Translation. ACL (1) 2023: 7873-7887
- [c7] Qingkai Fang, Yang Feng: Understanding and Bridging the Modality Gap for Speech Translation. ACL (1) 2023: 15864-15881
- [c6] Wenyu Guo, Qingkai Fang, Dong Yu, Yang Feng: Bridging the Gap between Synthetic and Authentic Images for Multimodal Machine Translation. EMNLP 2023: 2863-2874
- [c5] Qingkai Fang, Yan Zhou, Yang Feng: DASpeech: Directed Acyclic Transformer for Fast and High-quality Speech-to-Speech Translation. NeurIPS 2023
- [i9] Qingkai Fang, Yang Feng: Understanding and Bridging the Modality Gap for Speech Translation. CoRR abs/2305.08706 (2023)
- [i8] Qingkai Fang, Yang Feng: Back Translation for Speech-to-text Translation Without Transcripts. CoRR abs/2305.08709 (2023)
- [i7] Yan Zhou, Qingkai Fang, Yang Feng: CMOT: Cross-modal Mixup via Optimal Transport for Speech Translation. CoRR abs/2305.14635 (2023)
- [i6] Shaolei Zhang, Qingkai Fang, Zhuocheng Zhang, Zhengrui Ma, Yan Zhou, Langlin Huang, Mengyu Bu, Shangtong Gui, Yunji Chen, Xilin Chen, Yang Feng: BayLing: Bridging Cross-lingual Alignment and Instruction Following through Interactive Translation for Large Language Models. CoRR abs/2306.10968 (2023)
- [i5] Qingkai Fang, Yan Zhou, Yang Feng: DASpeech: Directed Acyclic Transformer for Fast and High-quality Speech-to-Speech Translation. CoRR abs/2310.07403 (2023)
- [i4] Wenyu Guo, Qingkai Fang, Dong Yu, Yang Feng: Bridging the Gap between Synthetic and Authentic Images for Multimodal Machine Translation. CoRR abs/2310.13361 (2023)

2022
- [c4] Qingkai Fang, Yang Feng: Neural Machine Translation with Phrase-Level Universal Visual Representations. ACL (1) 2022: 5687-5698
- [c3] Qingkai Fang, Rong Ye, Lei Li, Yang Feng, Mingxuan Wang: STEMM: Self-learning with Speech-text Manifold Mixup for Speech Translation. ACL (1) 2022: 7050-7062
- [c2] Zhe Yang, Qingkai Fang, Yang Feng: Low-resource Neural Machine Translation with Cross-modal Alignment. EMNLP 2022: 10134-10146
- [i3] Qingkai Fang, Yang Feng: Neural Machine Translation with Phrase-Level Universal Visual Representations. CoRR abs/2203.10299 (2022)
- [i2] Qingkai Fang, Rong Ye, Lei Li, Yang Feng, Mingxuan Wang: STEMM: Self-learning with Speech-text Manifold Mixup for Speech Translation. CoRR abs/2203.10426 (2022)
- [i1] Zhe Yang, Qingkai Fang, Yang Feng: Low-resource Neural Machine Translation with Cross-modal Alignment. CoRR abs/2210.06716 (2022)

2021
- [c1] Zhuoying Wang, Qingkai Fang, Yongtao Wang: Geometric Object 3D Reconstruction from Single Line Drawing Image Based on a Network for Classification and Sketch Extraction. ICDAR (1) 2021: 598-613
last updated on 2025-02-20 00:41 CET by the dblp team
all metadata released as open data under CC0 1.0 license