Shuohang Wang
Senior Researcher, Microsoft
Verified email at microsoft.com
Title · Cited by · Year
Machine Comprehension Using Match-LSTM and Answer Pointer
S Wang, J Jiang
International Conference on Learning Representations (ICLR 2017), 2017
659 · 2017
Learning Natural Language Inference with LSTM
S Wang, J Jiang
The 15th Annual Conference of the North American Chapter of the Association …, 2016
518 · 2016
R3: Reinforced Ranker-Reader for Open-Domain Question Answering
S Wang, M Yu, X Guo, Z Wang, T Klinger, W Zhang, S Chang, G Tesauro, ...
The 32nd AAAI Conference on Artificial Intelligence (AAAI), 2018
373 · 2018
A Compare-Aggregate Model for Matching Text Sequences
S Wang, J Jiang
International Conference on Learning Representations (ICLR 2017), 2017
312 · 2017
GPTEval: NLG Evaluation using GPT-4 with Better Human Alignment
Y Liu, D Iter, Y Xu, S Wang, R Xu, C Zhu
arXiv preprint arXiv:2303.16634, 2023
276 · 2023
An empirical study of training end-to-end vision-and-language transformers
ZY Dou, Y Xu, Z Gan, J Wang, S Wang, L Wang, C Zhu, P Zhang, L Yuan, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
262 · 2022
Hierarchical Graph Network for Multi-hop Question Answering
Y Fang, S Sun, Z Gan, R Pillai, S Wang, J Liu
arXiv preprint arXiv:1911.03631, 2019
196 · 2019
Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering
S Wang, M Yu, J Jiang, W Zhang, X Guo, S Chang, Z Wang, T Klinger, ...
International Conference on Learning Representations (ICLR 2018), 2018
195 · 2018
Want To Reduce Labeling Cost? GPT-3 Can Help
S Wang, Y Liu, Y Xu, C Zhu, M Zeng
arXiv preprint arXiv:2108.13487, 2021
131 · 2021
Prompting GPT-3 To Be Reliable
C Si, Z Gan, Z Yang, S Wang, J Wang, J Boyd-Graber, L Wang
arXiv preprint arXiv:2210.09150, 2022
126 · 2022
Generate rather than retrieve: Large language models are strong context generators
W Yu, D Iter, S Wang, Y Xu, M Ju, S Sanyal, C Zhu, M Zeng, M Jiang
arXiv preprint arXiv:2209.10063, 2022
119 · 2022
A Co-Matching Model for Multi-choice Reading Comprehension
S Wang, M Yu, S Chang, J Jiang
Annual Meeting of the Association for Computational Linguistics (ACL), 2018
107 · 2018
InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective
B Wang, S Wang, Y Cheng, Z Gan, R Jia, B Li, J Liu
arXiv preprint arXiv:2010.02329, 2020
102 · 2020
Multi-Fact Correction in Abstractive Text Summarization
Y Dong, S Wang, Z Gan, Y Cheng, JCK Cheung, J Liu
arXiv preprint arXiv:2010.02443, 2020
101 · 2020
Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models
B Wang, C Xu, S Wang, Z Gan, Y Cheng, J Gao, AH Awadallah, B Li
arXiv preprint arXiv:2111.02840, 2021
99 · 2021
Simple and Effective Curriculum Pointer-Generator Networks for Reading Comprehension over Long Narratives
Y Tay, S Wang, LA Tuan, J Fu, MC Phan, X Yuan, J Rao, SC Hui, A Zhang
Annual Meeting of the Association for Computational Linguistics (ACL), 2019
95 · 2019
Clip-event: Connecting text and images with event structures
M Li, R Xu, S Wang, L Zhou, X Lin, C Zhu, M Zeng, H Ji, SF Chang
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
94 · 2022
Training Data is More Valuable than You Think: A Simple and Effective Method by Retrieving from Training Data
S Wang, Y Xu, Y Fang, Y Liu, S Sun, R Xu, C Zhu, M Zeng
arXiv preprint arXiv:2203.08773, 2022
79 · 2022
LightningDOT: Pre-training Visual-Semantic Embeddings for Real-Time Image-Text Retrieval
S Sun, YC Chen, L Li, S Wang, Y Fang, J Liu
Proceedings of the 2021 Conference of the North American Chapter of the …, 2021
77 · 2021
KG-FiD: Infusing Knowledge Graph in Fusion-in-Decoder for Open-Domain Question Answering
D Yu, C Zhu, Y Fang, W Yu, S Wang, Y Xu, X Ren, Y Yang, M Zeng
arXiv preprint arXiv:2110.04330, 2021
75 · 2021
Articles 1–20