Improving Grammatical Error Correction via Pre-training a Copy-Augmented Architecture with Unlabeled Data. W. Zhao, L. Wang, K. Shen, R. Jia, J. Liu. arXiv preprint arXiv:1903.00138, 2019. Cited by 227.
SimKGC: Simple Contrastive Knowledge Graph Completion with Pre-trained Language Models. L. Wang, W. Zhao, Z. Wei, J. Liu. arXiv preprint arXiv:2203.02167, 2022. Cited by 121.
Yuanfudao at SemEval-2018 Task 11: Three-way Attention and Relational Knowledge for Commonsense Machine Comprehension. L. Wang, M. Sun, W. Zhao, K. Shen, J. Liu. arXiv preprint arXiv:1803.00191, 2018. Cited by 85.
Denoising-Based Sequence-to-Sequence Pre-training for Text Generation. L. Wang, W. Zhao, R. Jia, S. Li, J. Liu. arXiv preprint arXiv:1908.08206, 2019. Cited by 44.
Ape210K: A Large-Scale and Template-Rich Dataset of Math Word Problems. W. Zhao, M. Shang, Y. Liu, L. Wang, J. Liu. arXiv preprint arXiv:2009.11506, 2020. Cited by 41.
Multi-Perspective Context Aggregation for Semi-supervised Cloze-Style Reading Comprehension. L. Wang, S. Li, W. Zhao, K. Shen, M. Sun, R. Jia, J. Liu. arXiv preprint arXiv:1808.06289, 2018. Cited by 12.