Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets. M. Geva, Y. Goldberg, J. Berant. EMNLP 2019. Cited by 240.
Injecting Numerical Reasoning Skills into Language Models. M. Geva, A. Gupta, J. Berant. ACL 2020. Cited by 132.
Break It Down: A Question Understanding Benchmark. T. Wolfson, M. Geva, A. Gupta, M. Gardner, Y. Goldberg, D. Deutch, J. Berant. TACL, 2020. Cited by 111.
Beyond the Imitation Game: Quantifying and Extrapolating the Capabilities of Language Models. A. Srivastava, A. Rastogi, A. Rao, A. A. M. Shoeb, A. Abid, A. Fisch, A. R. Brown, et al. arXiv preprint arXiv:2206.04615, 2022. Cited by 106.
Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies. M. Geva, D. Khashabi, E. Segal, T. Khot, D. Roth, J. Berant. Transactions of the Association for Computational Linguistics 9, 346–361, 2021. Cited by 91.
Transformer Feed-Forward Layers Are Key-Value Memories. M. Geva, R. Schuster, J. Berant, O. Levy. arXiv preprint arXiv:2012.14913, 2020. Cited by 87.
DiscoFuse: A Large-Scale Dataset for Discourse-Based Sentence Fusion. M. Geva, E. Malmi, I. Szpektor, J. Berant. NAACL-HLT 2019 (Vol. 1), 3443–3455. Cited by 38.
Emergence of Communication in an Interactive World with Consistent Speakers. B. Bogin, M. Geva, J. Berant. arXiv preprint arXiv:1809.00549, 2018. Cited by 32.
Transformer Feed-Forward Layers Build Predictions by Promoting Concepts in the Vocabulary Space. M. Geva, A. Caciularu, K. R. Wang, Y. Goldberg. arXiv preprint arXiv:2203.14680, 2022. Cited by 20.
SCROLLS: Standardized Comparison over Long Language Sequences. U. Shaham, E. Segal, M. Ivgi, A. Efrat, O. Yoran, A. Haviv, A. Gupta, W. Xiong, et al. arXiv preprint arXiv:2201.03533, 2022. Cited by 17.
Learning to Search in Long Documents Using Document Structure. M. Geva, J. Berant. COLING 2018. Cited by 14.
Evaluating Semantic Parsing against a Simple Web-Based Question Answering Model. A. Talmor, M. Geva, J. Berant. *SEM 2018, 2017. Cited by 14.
Break, Perturb, Build: Automatic Perturbation of Reasoning Paths through Question Decomposition. M. Geva, T. Wolfson, J. Berant. Transactions of the Association for Computational Linguistics 10, 111–126, 2022. Cited by 9.
LM-Debugger: An Interactive Tool for Inspection and Intervention in Transformer-Based Language Models. M. Geva, A. Caciularu, G. Dar, P. Roit, S. Sadde, M. Shlain, B. Tamir, et al. arXiv preprint arXiv:2204.12130, 2022. Cited by 9.
Don't Blame the Annotator: Bias Already Starts in the Annotation Instructions. M. Parmar, S. Mishra, M. Geva, C. Baral. arXiv preprint arXiv:2205.00415, 2022. Cited by 8.
Analyzing Transformers in Embedding Space. G. Dar, M. Geva, A. Gupta, J. Berant. arXiv preprint arXiv:2209.02535, 2022. Cited by 4.
What's in Your Head? Emergent Behaviour in Multi-Task Transformer Models. M. Geva, U. Katz, A. Ben-Arie, J. Berant. arXiv preprint arXiv:2104.06129, 2021. Cited by 4.
Inferring Implicit Relations with Language Models. U. Katz, M. Geva, J. Berant. arXiv preprint arXiv:2204.13778, 2022. Cited by 2.
Media Management System for Video Data Processing and Adaptation Data Generation. R. Ronen, I. Bar-Menachem, O. Jassin, A. Levi, O. Nano, N. Oron, et al. US Patent 10,762,375, 2020. Cited by 2.
Crawling the Internal Knowledge-Base of Language Models. R. Cohen, M. Geva, J. Berant, A. Globerson. arXiv preprint arXiv:2301.12810, 2023. Cited by 1.