Chengwei Qin
Is ChatGPT a general-purpose natural language processing task solver?
C Qin, A Zhang, Z Zhang, J Chen, M Yasunaga, D Yang
EMNLP 2023, 2023
Is GPT-3 a good data annotator?
B Ding*, C Qin*, L Liu, L Bing, S Joty, B Li
ACL 2023 (*equal contribution), 2022
LFPT5: A Unified Framework for Lifelong Few-shot Language Learning Based on Prompt Tuning of T5
C Qin, S Joty
ICLR 2022, 2022
Verify-and-edit: A knowledge-enhanced chain-of-thought framework
R Zhao, X Li, S Joty, C Qin, L Bing
ACL 2023, 2023
Continual few-shot relation learning via embedding space regularization and data augmentation
C Qin, S Joty
ACL 2022, 2022
Retrieving multimodal information for augmented generation: A survey
R Zhao, H Chen, W Wang, F Jiao, XL Do, C Qin, B Ding, X Guo, M Li, X Li, ...
Findings of EMNLP 2023, 2023
ChatGPT's One-year Anniversary: Are Open-Source Large Language Models Catching up?
H Chen, F Jiao, X Li, C Qin, M Ravaut, R Zhao, C Xiong, S Joty
arXiv preprint arXiv:2311.16989, 2023
Data augmentation using LLMs: Data perspectives, learning paradigms and challenges
B Ding, C Qin, R Zhao, T Luo, X Li, G Chen, W Xia, J Hu, AT Luu, S Joty
ACL 2024, 2024
Is ChatGPT a general-purpose natural language processing task solver?
C Qin, A Zhang, Z Zhang, J Chen, M Yasunaga, D Yang
arXiv preprint arXiv:2302.06476, 2023
In-Context Learning with Iterative Demonstration Selection
C Qin, A Zhang, A Dagar, W Ye
arXiv preprint arXiv:2310.09881, 2023
Chain of LoRA: Efficient Fine-tuning of Language Models via Residual Learning
W Xia, C Qin, E Hazan
arXiv preprint arXiv:2401.04151, 2024
Lifelong Sequence Generation with Dynamic Module Expansion and Adaptation
C Qin, C Chen, S Joty
EMNLP 2023, 2023
Learning to Initialize: Can Meta Learning Improve Cross-task Generalization in Prompt Tuning?
C Qin, S Joty, Q Li, R Zhao
ACL 2023, 2023
VMS: Traffic balancing based on virtual switches in datacenter networks
Z Li, J Bi, Y Zhang, AB Dogar, C Qin
ICNP 2017, 1-10, 2017
How Much are LLMs Contaminated? A Comprehensive Survey and the LLMSanitize Library
M Ravaut, B Ding, F Jiao, H Chen, X Li, R Zhao, C Qin, C Xiong, S Joty
arXiv preprint arXiv:2404.00699, 2024
Is a Large Language Model a Good Annotator for Event Extraction?
R Chen, C Qin, W Jiang, D Choi
Proceedings of the AAAI Conference on Artificial Intelligence 38 (16), 17772 …, 2024
PromptSum: Parameter-Efficient Controllable Abstractive Summarization
M Ravaut, H Chen, R Zhao, C Qin, S Joty, N Chen
arXiv preprint arXiv:2308.03117, 2023
Contrastive Learning with Generated Representations for Inductive Knowledge Graph Embedding
Q Li, S Joty, D Wang, S Feng, Y Zhang, C Qin
Findings of ACL 2023, 2023
Hearing Lips in Noise: Universal Viseme-Phoneme Mapping and Transfer for Robust Audio-Visual Speech Recognition
Y Hu, R Li, C Chen, C Qin, Q Zhu, ES Chng
ACL 2023 (Area Chair Awards), 2023
Learning Planning-based Reasoning by Trajectories Collection and Process Reward Synthesizing
F Jiao, C Qin, Z Liu, NF Chen, S Joty
arXiv preprint arXiv:2402.00658, 2024