Chenliang Li
Alibaba Inc.
Guiding generation for abstractive text summarization based on key information guide network
C Li, W Xu, S Li, S Gao
Proceedings of the 2018 Conference of the North American Chapter of the …, 2018
mPLUG-Owl: Modularization empowers large language models with multimodality
Q Ye, H Xu, G Xu, J Ye, M Yan, Y Zhou, J Wang, A Hu, P Shi, Y Shi, C Li, ...
arXiv preprint arXiv:2304.14178, 2023
E2E-VLP: end-to-end vision-language pre-training enhanced by visual learning
H Xu, M Yan, C Li, B Bi, S Huang, W Xiao, F Huang
arXiv preprint arXiv:2106.01804, 2021
StructuralLM: Structural pre-training for form understanding
C Li, B Bi, M Yan, W Wang, S Huang, F Huang, L Si
arXiv preprint arXiv:2105.11210, 2021
mPLUG: Effective and efficient vision-language learning by cross-modal skip-connections
C Li, H Xu, J Tian, W Wang, M Yan, B Bi, J Ye, H Chen, G Xu, Z Cao, ...
arXiv preprint arXiv:2205.12005, 2022
PALM: Pre-training an autoencoding & autoregressive language model for context-conditioned generation
B Bi, C Li, C Wu, M Yan, W Wang, S Huang, F Huang, L Si
arXiv preprint arXiv:2004.07159, 2020
Incorporating external knowledge into machine reading for generative question answering
B Bi, C Wu, M Yan, W Wang, J Xia, C Li
arXiv preprint arXiv:1909.02745, 2019
mPLUG-2: A modularized multi-modal foundation model across text, image and video
H Xu, Q Ye, M Yan, Y Shi, J Ye, Y Xu, C Li
arXiv preprint arXiv:2302.00402, 2023
IDST at TREC 2019 deep learning track: Deep cascade ranking with generation-based document expansion and pre-trained language modeling
M Yan, C Li, C Wu, B Bi, W Wang, J Xia, L Si
Proceedings of the Twenty-Eighth Text REtrieval Conference, TREC, 13-15, 2019
Multi-task learning for abstractive text summarization with key information guide network
W Xu, C Li, M Lee, C Zhang
EURASIP Journal on Advances in Signal Processing 2020 (1), 1-11, 2020
SemVLP: Vision-language pre-training by aligning semantics at multiple levels
C Li, M Yan, H Xu, F Luo, W Wang, B Bi, S Huang
arXiv preprint arXiv:2103.07829, 2021
mPLUG: Effective and efficient vision-language learning by cross-modal skip-connections
C Li, H Xu, J Tian, W Wang, M Yan, B Bi, J Ye, H Chen, G Xu, Z Cao, ...
2022
IDST at TREC 2019 Deep Learning Track: Deep Cascade Ranking with Generation-based Document Expansion and Pre-trained Language Modeling.
M Yan, C Li, C Wu, B Bi, W Wang, J Xia, L Si
TREC, 2019
A unified pretraining framework for passage ranking and expansion
M Yan, C Li, B Bi, W Wang, S Huang
Proceedings of the AAAI Conference on Artificial Intelligence 35 (5), 4555-4563, 2021
SemVLP: Vision-language pre-training by aligning semantics at multiple levels
C Li, M Yan, H Xu, F Luo, W Wang, B Bi, S Huang
arXiv preprint arXiv:2103.07829, 2021
MinD at SemEval-2021 Task 6: Propaganda detection using transfer learning and multimodal fusion
J Tian, M Gui, C Li, M Yan, W Xiao
Proceedings of the 15th International Workshop on Semantic Evaluation …, 2021
mPLUG-DocOwl: Modularized multimodal large language model for document understanding
J Ye, A Hu, H Xu, Q Ye, M Yan, Y Dan, C Zhao, G Xu, C Li, J Tian, Q Qi, ...
arXiv preprint arXiv:2307.02499, 2023
Bi-VLDoc: Bidirectional vision-language modeling for visually-rich document understanding
C Luo, G Tang, Q Zheng, C Yao, L Jin, C Li, Y Xue, L Si
arXiv preprint arXiv:2206.13155, 2022
Addressing semantic drift in generative question answering with auxiliary extraction
C Li, B Bi, M Yan, W Wang, S Huang
Proceedings of the 59th Annual Meeting of the Association for Computational …, 2021
Grid-VLP: Revisiting grid features for vision-language pre-training
M Yan, H Xu, C Li, B Bi, J Tian, M Gui, W Wang
arXiv preprint arXiv:2108.09479, 2021