Yujia Qin
Verified email at mails.tsinghua.edu.cn - Homepage
Title · Cited by · Year
CPM: A large-scale generative Chinese pre-trained language model
Z Zhang, X Han, H Zhou, P Ke, Y Gu, D Ye, Y Qin, Y Su, H Ji, J Guan, F Qi, ...
AI Open 2, 93-99, 2021
Cited by 44 · 2021
ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning
Y Qin, Y Lin, R Takanobu, Z Liu, P Li, H Ji, M Huang, M Sun, J Zhou
ACL 2021, 2020
Cited by 42 · 2020
Learning from Explanations with Neural Execution Tree
Z Wang, Y Qin, W Zhou, J Yan, Q Ye, L Neves, Z Liu, X Ren
ICLR 2020, 2019
Cited by 27 · 2019
Improving sequence modeling ability of recurrent neural networks via sememes
Y Qin, F Qi, S Ouyang, Z Liu, C Yang, Y Wang, Q Liu, M Sun
IEEE/ACM Transactions on Audio, Speech, and Language Processing 28, 2364-2373, 2020
Cited by 13 · 2020
Delta tuning: A comprehensive study of parameter efficient methods for pre-trained language models
N Ding, Y Qin, G Yang, F Wei, Z Yang, Y Su, S Hu, Y Chen, CM Chan, ...
arXiv preprint arXiv:2203.06904, 2022
Cited by 8 · 2022
On Transferability of Prompt Tuning for Natural Language Understanding
Y Su, X Wang, Y Qin, CM Chan, Y Lin, Z Liu, P Li, J Li, L Hou, M Sun, ...
NAACL 2022, 2021
Cited by 7 · 2021
Knowledge inheritance for pre-trained language models
Y Qin, Y Lin, J Yi, J Zhang, X Han, Z Zhang, Y Su, Z Liu, P Li, M Sun, ...
NAACL 2022, 2021
Cited by 7 · 2021
Exploring low-dimensional intrinsic task subspace via prompt tuning
Y Qin, X Wang, Y Su, Y Lin, N Ding, Z Liu, J Li, L Hou, P Li, M Sun, J Zhou
Previously Accepted by Findings of ACL 2022, 2021
Cited by 6 · 2021
Enhancing recurrent neural networks with sememes
Y Qin, F Qi, S Ouyang, Z Liu, C Yang, Y Wang, Q Liu, M Sun
IEEE/ACM Transactions on Audio, Speech, and Language Processing 28, 2364-2373, 2020
Cited by 5 · 2020
bert2BERT: Towards Reusable Pretrained Language Models
C Chen, Y Yin, L Shang, X Jiang, Y Qin, F Wang, Z Wang, X Chen, Z Liu, ...
ACL 2022, 2021
Cited by 4 · 2021
ELLE: Efficient Lifelong Pre-training for Emerging Data
Y Qin, J Zhang, Y Lin, Z Liu, P Li, M Sun, J Zhou
Findings of ACL 2022, 2022
Cited by 3 · 2022
On transferability of prompt tuning for natural language processing
Y Su, X Wang, Y Qin, CM Chan, Y Lin, H Wang, K Wen, Z Liu, P Li, J Li, ...
Proceedings of the 2022 Conference of the North American Chapter of the …, 2022
Cited by 1 · 2022
ProQA: Structural Prompt-based Pre-training for Unified Question Answering
W Zhong, Y Gao, N Ding, Y Qin, Z Liu, M Zhou, J Wang, J Yin, N Duan
NAACL 2022, 2022
Cited by 1 · 2022
Pass off Fish Eyes for Pearls: Attacking Model Selection of Pre-trained Models
B Zhu, Y Qin, F Qi, Y Deng, Z Liu, M Sun, M Gu
Proceedings of the 60th Annual Meeting of the Association for Computational …, 2022
2022