Hang Yan
Computer Science, Fudan University
Verified email at fudan.edu.cn
Title · Cited by · Year
FLAT: Chinese NER using flat-lattice transformer
X Li, H Yan, X Qiu, X Huang
arXiv preprint arXiv:2004.11795, 2020
405 · 2020
TENER: adapting transformer encoder for named entity recognition
H Yan, B Deng, X Li, X Qiu
arXiv preprint arXiv:1911.04474, 2019
344 · 2019
A unified generative framework for various NER subtasks
H Yan, T Gui, J Dai, Q Guo, Z Zhang, X Qiu
arXiv preprint arXiv:2106.01223, 2021
236 · 2021
A unified generative framework for aspect-based sentiment analysis
H Yan, J Dai, X Qiu, Z Zhang
arXiv preprint arXiv:2106.04300, 2021
211 · 2021
Does syntax matter? A strong baseline for aspect-based sentiment analysis with RoBERTa
J Dai, H Yan, T Sun, P Liu, X Qiu
arXiv preprint arXiv:2104.04986, 2021
137 · 2021
CPT: A pre-trained unbalanced transformer for both Chinese language understanding and generation
Y Shao, Z Geng, Y Liu, J Dai, H Yan, F Yang, Z Li, H Bao, X Qiu
Science China Information Sciences 67 (5), 1-13, 2024
128 · 2024
Learning sparse sharing architectures for multiple tasks
T Sun, Y Shao, X Li, P Liu, H Yan, X Qiu, X Huang
Proceedings of the AAAI conference on artificial intelligence 34 (05), 8936-8943, 2020
127 · 2020
Contrast and generation make BART a good dialogue emotion recognizer
S Li, H Yan, X Qiu
Proceedings of the AAAI conference on artificial intelligence 36 (10), 11002 …, 2022
57 · 2022
InternLM-XComposer: A vision-language large model for advanced text-image comprehension and composition
P Zhang, X Dong, B Wang, Y Cao, C Xu, L Ouyang, Z Zhao, S Ding, S Zhang, ...
arXiv preprint arXiv:2309.15112, 2023
54 · 2023
Unified demonstration retriever for in-context learning
X Li, K Lv, H Yan, T Lin, W Zhu, Y Ni, G Xie, X Wang, X Qiu
arXiv preprint arXiv:2305.04320, 2023
41 · 2023
A concise model for multi-criteria Chinese word segmentation with transformer encoder
X Qiu, H Pei, H Yan, X Huang
arXiv preprint arXiv:1906.12035, 2019
38 · 2019
A graph-based model for joint Chinese word segmentation and dependency parsing
H Yan, X Qiu, X Huang
Transactions of the Association for Computational Linguistics 8, 78-92, 2020
37* · 2020
SpellBERT: A lightweight pretrained model for Chinese spelling check
T Ji, H Yan, X Qiu
Proceedings of the 2021 conference on empirical methods in natural language …, 2021
35 · 2021
Secrets of RLHF in large language models part I: PPO
R Zheng, S Dou, S Gao, Y Hua, W Shen, B Wang, Y Liu, S Jin, Q Liu, ...
arXiv preprint arXiv:2307.04964, 2023
29 · 2023
CodeIE: Large code generation models are better few-shot information extractors
P Li, T Sun, Q Tang, H Yan, Y Wu, X Huang, X Qiu
arXiv preprint arXiv:2305.05711, 2023
27 · 2023
Accelerating BERT inference for sequence labeling via early-exit
X Li, Y Shao, T Sun, H Yan, X Qiu, X Huang
arXiv preprint arXiv:2105.13878, 2021
27 · 2021
InternLM-XComposer2: Mastering free-form text-image composition and comprehension in vision-language large model
X Dong, P Zhang, Y Zang, Y Cao, B Wang, L Ouyang, X Wei, S Zhang, ...
arXiv preprint arXiv:2401.16420, 2024
20 · 2024
An embarrassingly easy but strong baseline for nested named entity recognition
H Yan, Y Sun, X Li, X Qiu
arXiv preprint arXiv:2208.04534, 2022
20 · 2022
BERT for monolingual and cross-lingual reverse dictionary
H Yan, X Li, X Qiu
arXiv preprint arXiv:2009.14790, 2020
20 · 2020
Gaussian word embedding with a Wasserstein distance loss
C Sun, H Yan, X Qiu, X Huang
arXiv preprint arXiv:1808.07016, 2018
18 · 2018