Mengjie Zhao
Sony Group Corporation
Verified email at cis.lmu.de - Homepage
Title
Cited by
Year
Masking as an Efficient Alternative to Finetuning for Pretrained Language Models
M Zhao, T Lin, F Mi, M Jaggi, H Schütze
Empirical Methods in Natural Language Processing (EMNLP), 2226--2241, 2020
80 · 2020
Continual learning for natural language generation in task-oriented dialog systems
F Mi, L Chen, M Zhao, M Huang, B Faltings
Findings of the Association for Computational Linguistics: EMNLP 2020, 3461 …, 2020
53 · 2020
A Closer Look at Few-Shot Crosslingual Transfer: The Choice of Shots Matters
M Zhao, Y Zhu, E Shareghi, I Vulić, R Reichart, A Korhonen, H Schütze
Annual Meeting of the Association for Computational Linguistics (ACL) 1 …, 2021
51* · 2021
Discrete and Soft Prompting for Multilingual Models
M Zhao, H Schütze
Empirical Methods in Natural Language Processing (EMNLP), 8547--8555, 2021
48 · 2021
GraphCode2Vec: Generic Code Embedding via Lexical and Program Dependence Analyses
W Ma, M Zhao, E Soremekun, Q Hu, J Zhang, M Papadakis, M Cordy, ...
IEEE/ACM The 2022 Mining Software Repositories Conference, 2022
27 · 2022
Quantifying the contextualization of word representations with semantic class probing
M Zhao, P Dufter, Y Yaghoobzadeh, H Schütze
Findings of the Association for Computational Linguistics: EMNLP 2020, 1219 …, 2020
25 · 2020
Embedding learning through multilingual concept induction
P Dufter, M Zhao, M Schmitt, A Fraser, H Schütze
Annual Meeting of the Association for Computational Linguistics (ACL) 1 …, 2018
23 · 2018
Modular and Parameter-Efficient Multimodal Fusion with Prompting
S Liang, M Zhao, H Schütze
Findings of the Association for Computational Linguistics: ACL 2022, 2022
22 · 2022
A multilingual bpe embedding space for universal sentiment lexicon induction
M Zhao, H Schütze
Annual Meeting of the Association for Computational Linguistics (ACL), 3506--3517, 2019
13 · 2019
LMTurk: Few-Shot Learners as Crowdsourcing Workers in a Language-Model-as-a-Service Framework
M Zhao, F Mi, Y Wang, M Li, X Jiang, Q Liu, H Schütze
Findings of the Association for Computational Linguistics: NAACL 2022, 2022
12* · 2022
Are Code Pre-trained Models Powerful to Learn Code Syntax and Semantics?
W Ma, S Liu, M Zhao, Q Hu, J Zhang, W Wang, Y Liu
arXiv preprint arXiv:2212.10017, 2022
7* · 2022
Multilingual Embeddings Jointly Induced from Contexts and Concepts: Simple, Strong and Scalable
P Dufter, M Zhao, H Schütze
arXiv:1811.00586, 2018
3* · 2018
Towards reporting bias in visual-language datasets: bimodal augmentation by decoupling object-attribute association
Q Wu, M Zhao, Y He, L Huang, J Ono, H Wakaki, Y Mitsufuji
arXiv preprint arXiv:2310.01330, 2023
1 · 2023
This joke is [MASK]: Recognizing Humor and Offense with Prompting
J Li, M Zhao, Y Xie, A Maronikolakis, P Pu, H Schütze
NeurIPS 2022 - Transfer Learning for Natural Language Processing Workshop, 2022
1 · 2022
Few-shot Dialogue Strategy Learning for Motivational Interviewing via Inductive Reasoning
Z Xie, BP Majumder, M Zhao, Y Maeda, K Yamada, H Wakaki, J McAuley
arXiv preprint arXiv:2403.15737, 2024
2024
DiffuCOMET: Contextual Commonsense Knowledge Diffusion
S Gao, M Ismayilzada, M Zhao, H Wakaki, Y Mitsufuji, A Bosselut
arXiv preprint arXiv:2402.17011, 2024
2024
Using Natural Language Inference to Improve Persona Extraction from Dialogue in a New Domain
A DeLucia, M Zhao, Y Maeda, M Yoda, K Yamada, H Wakaki
arXiv preprint arXiv:2401.06742, 2024
2024
On the Language Encoder of Contrastive Cross-modal Models
M Zhao, J Ono, Z Zhong, CH Lai, Y Takida, N Murata, WH Liao, T Shibuya, ...
arXiv preprint arXiv:2310.13267, 2023
2023
Efficient transfer learning with pretrained language models
M Zhao
Universität München, 2022
2022
Articles 1–19