Mengjie Zhao
Verified email at cis.lmu.de - Homepage
Title
Cited by
Year
Masking as an Efficient Alternative to Finetuning for Pretrained Language Models
M Zhao, T Lin, F Mi, M Jaggi, H Schütze
Empirical Methods in Natural Language Processing (EMNLP), 2226--2241, 2020
53 · 2020
Continual learning for natural language generation in task-oriented dialog systems
F Mi, L Chen, M Zhao, M Huang, B Faltings
Findings of the Association for Computational Linguistics: EMNLP 2020, 3461 …, 2020
34 · 2020
A Closer Look at Few-Shot Crosslingual Transfer: The Choice of Shots Matters
M Zhao, Y Zhu, E Shareghi, I Vulić, R Reichart, A Korhonen, H Schütze
Annual Meeting of the Association for Computational Linguistics (ACL) 1 …, 2021
31* · 2021
Discrete and Soft Prompting for Multilingual Models
M Zhao, H Schütze
Empirical Methods in Natural Language Processing (EMNLP), 8547--8555, 2021
22 · 2021
Embedding learning through multilingual concept induction
P Dufter, M Zhao, M Schmitt, A Fraser, H Schütze
Annual Meeting of the Association for Computational Linguistics (ACL) 1 …, 2018
21 · 2018
Quantifying the contextualization of word representations with semantic class probing
M Zhao, P Dufter, Y Yaghoobzadeh, H Schütze
Findings of the Association for Computational Linguistics: EMNLP 2020, 1219 …, 2020
19 · 2020
GraphCode2Vec: Generic Code Embedding via Lexical and Program Dependence Analyses
W Ma, M Zhao, E Soremekun, Q Hu, J Zhang, M Papadakis, M Cordy, ...
IEEE/ACM International Conference on Mining Software Repositories (MSR), 2022
10 · 2022
A Multilingual BPE Embedding Space for Universal Sentiment Lexicon Induction
M Zhao, H Schütze
Annual Meeting of the Association for Computational Linguistics (ACL), 3506--3517, 2019
9 · 2019
LMTurk: Few-Shot Learners as Crowdsourcing Workers in a Language-Model-as-a-Service Framework
M Zhao, F Mi, Y Wang, M Li, X Jiang, Q Liu, H Schütze
Findings of the Association for Computational Linguistics: NAACL 2022, 2022
7* · 2022
Modular and Parameter-Efficient Multimodal Fusion with Prompting
S Liang, M Zhao, H Schütze
Findings of the Association for Computational Linguistics: ACL 2022, 2022
4 · 2022
Multilingual Embeddings Jointly Induced from Contexts and Concepts: Simple, Strong and Scalable
P Dufter, M Zhao, H Schütze
arXiv:1811.00586, 2018
3* · 2018
Is Self-Attention Powerful to Learn Code Syntax and Semantics?
W Ma, M Zhao, X Xie, Q Hu, S Liu, J Zhang, W Wang, Y Liu
arXiv preprint arXiv:2212.10017, 2022
2022
This joke is [MASK]: Recognizing Humor and Offense with Prompting
J Li, M Zhao, Y Xie, A Maronikolakis, P Pu, H Schütze
NeurIPS 2022 - Transfer Learning for Natural Language Processing Workshop, 2022
2022
Efficient transfer learning with pretrained language models
M Zhao
Universität München, 2022
2022
Articles 1–14