Hangbo Bao
BEiT: BERT Pre-Training of Image Transformers
H Bao, L Dong, S Piao, F Wei
International Conference on Learning Representations, 2022
Neural question generation from text: A preliminary study
Q Zhou, N Yang, F Wei, C Tan, H Bao, M Zhou
National CCF Conference on Natural Language Processing and Chinese Computing …, 2017
MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers
W Wang, F Wei, L Dong, H Bao, N Yang, M Zhou
Advances in Neural Information Processing Systems 33, 5776-5788, 2020
UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training
H Bao, L Dong, F Wei, W Wang, N Yang, X Liu, Y Wang, J Gao, S Piao, ...
International Conference on Machine Learning, 642-652, 2020
VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts
W Wang, H Bao, L Dong, F Wei
arXiv preprint arXiv:2111.02358, 2021
MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers
W Wang, H Bao, S Huang, L Dong, F Wei
arXiv preprint arXiv:2012.15828, 2020
Neural melody composition from lyrics
H Bao, S Huang, F Wei, L Cui, Y Wu, C Tan, S Piao, M Zhou
CCF International Conference on Natural Language Processing and Chinese …, 2019
Corrupted Image Modeling for Self-Supervised Visual Pre-Training
Y Fang, L Dong, H Bao, X Wang, F Wei
arXiv preprint arXiv:2202.03382, 2022
Inspecting Unification of Encoding and Matching with Transformer: A Case Study of Machine Reading Comprehension
H Bao, L Dong, F Wei, W Wang, N Yang, L Cui, S Piao, M Zhou
Proceedings of the 2nd Workshop on Machine Reading for Question Answering, 14-18, 2019
VL-BEiT: Generative Vision-Language Pretraining
H Bao, W Wang, L Dong, F Wei
arXiv preprint arXiv:2206.01127, 2022
s2s-ft: Fine-Tuning Pretrained Transformer Encoders for Sequence-to-Sequence Learning
H Bao, L Dong, W Wang, N Yang, F Wei
arXiv preprint arXiv:2110.13640, 2021
BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers
Z Peng, L Dong, H Bao, Q Ye, F Wei
arXiv preprint arXiv:2208.06366, 2022
Learning to Sample Replacements for ELECTRA Pre-Training
Y Hao, L Dong, H Bao, K Xu, F Wei
arXiv preprint arXiv:2106.13715, 2021
Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks
W Wang, H Bao, L Dong, J Bjorck, Z Peng, Q Liu, K Aggarwal, ...
arXiv preprint arXiv:2208.10442, 2022
THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption
T Chen, H Bao, S Huang, L Dong, B Jiao, D Jiang, H Zhou, J Li, F Wei
Findings of the Association for Computational Linguistics: ACL 2022, 3510-3520, 2022
Attention Temperature Matters in Abstractive Summarization Distillation
S Zhang, X Zhang, H Bao, F Wei
arXiv preprint arXiv:2106.03441, 2021