Hangbo Bao
Microsoft Research
BEiT: BERT Pre-Training of Image Transformers
H Bao, L Dong, S Piao, F Wei
International Conference on Learning Representations, 2022
MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-trained Transformers
W Wang, F Wei, L Dong, H Bao, N Yang, M Zhou
Advances in Neural Information Processing Systems 33, 5776-5788, 2020
Image as a Foreign Language: BEiT Pretraining for Vision and Vision-Language Tasks
W Wang, H Bao, L Dong, J Bjorck, Z Peng, Q Liu, K Aggarwal, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts
H Bao, W Wang, L Dong, Q Liu, OK Mohammed, K Aggarwal, S Som, ...
36th Conference on Neural Information Processing Systems (NeurIPS 2022), 2022
Neural question generation from text: A preliminary study
Q Zhou, N Yang, F Wei, C Tan, H Bao, M Zhou
Natural Language Processing and Chinese Computing: 6th CCF International …, 2018
UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training
H Bao, L Dong, F Wei, W Wang, N Yang, X Liu, Y Wang, J Gao, S Piao, ...
International Conference on Machine Learning, 642-652, 2020
BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers
Z Peng, L Dong, H Bao, Q Ye, F Wei
arXiv preprint arXiv:2208.06366, 2022
MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers
W Wang, H Bao, S Huang, L Dong, F Wei
arXiv preprint arXiv:2012.15828, 2020
Corrupted Image Modeling for Self-Supervised Visual Pre-Training
Y Fang, L Dong, H Bao, X Wang, F Wei
arXiv preprint arXiv:2202.03382, 2022
THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption
T Chen, H Bao, S Huang, L Dong, B Jiao, D Jiang, H Zhou, J Li, F Wei
Findings of the Association for Computational Linguistics: ACL 2022, 3510-3520, 2022
Neural Melody Composition from Lyrics
H Bao, S Huang, F Wei, L Cui, Y Wu, C Tan, S Piao, M Zhou
Natural Language Processing and Chinese Computing: 8th CCF International …, 2019
Attention Temperature Matters in Abstractive Summarization Distillation
S Zhang, X Zhang, H Bao, F Wei
arXiv preprint arXiv:2106.03441, 2021
Fine-Tuning Pretrained Transformer Encoders for Sequence-to-Sequence Learning
H Bao, L Dong, W Wang, N Yang, S Piao, F Wei
International Journal of Machine Learning and Cybernetics 15 (5), 1711-1728, 2024
Learning to Sample Replacements for ELECTRA Pre-Training
Y Hao, L Dong, H Bao, K Xu, F Wei
arXiv preprint arXiv:2106.13715, 2021
Inspecting Unification of Encoding and Matching with Transformer: A Case Study of Machine Reading Comprehension
H Bao, L Dong, F Wei, W Wang, N Yang, L Cui, S Piao, M Zhou
Proceedings of the 2nd Workshop on Machine Reading for Question Answering, 14-18, 2019
A Unified View of Masked Image Modeling
Z Peng, L Dong, H Bao, Q Ye, F Wei
arXiv preprint arXiv:2210.10615, 2022
Articles 1–16