| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Gemini: a family of highly capable multimodal models | G Team, R Anil, S Borgeaud, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ... | arXiv preprint arXiv:2312.11805, 2023 | 1920 | 2023 |
| Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context | G Team, P Georgiev, VI Lei, R Burnell, L Bai, A Gulati, G Tanzer, ... | arXiv preprint arXiv:2403.05530, 2024 | 536 | 2024 |
| Query focused abstractive summarization: Incorporating query relevance, multi-document coverage, and summary length constraints into seq2seq models | T Baumel, M Eyal, M Elhadad | arXiv preprint arXiv:1801.07704, 2018 | 136 | 2018 |
| Question answering as an automatic evaluation metric for news article summarization | M Eyal, T Baumel, M Elhadad | arXiv preprint arXiv:1906.00318, 2019 | 132 | 2019 |
| Diffeomorphic temporal alignment nets | RA Shapira Weber, M Eyal, N Skafte, O Shriki, O Freifeld | Advances in Neural Information Processing Systems 32, 2019 | 41 | 2019 |
| Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations? | Z Gekhman, G Yona, R Aharoni, M Eyal, A Feder, R Reichart, J Herzig | arXiv preprint arXiv:2405.05904, 2024 | 38 | 2024 |
| Interactive extractive search over biomedical corpora | H Taub-Tabib, M Shlain, S Sadde, D Lahav, M Eyal, Y Cohen, Y Goldberg | Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing …, 2020 | 27 | 2020 |
| Multilingual instruction tuning with just a pinch of multilinguality | U Shaham, J Herzig, R Aharoni, I Szpektor, R Tsarfaty, M Eyal | arXiv preprint arXiv:2401.01854, 2024 | 15 | 2024 |
| Large scale substitution-based word sense induction | M Eyal, S Sadde, H Taub-Tabib, Y Goldberg | arXiv preprint arXiv:2110.07681, 2021 | 12 | 2021 |
| Interactive extractive search over biomedical corpora | H Taub-Tabib, M Shlain, S Sadde, D Lahav, M Eyal, Y Cohen, Y Goldberg | arXiv preprint arXiv:2006.04148, 2020 | 8 | 2020 |
| Bootstrapping relation extractors using syntactic search by examples | M Eyal, A Amrami, H Taub-Tabib, Y Goldberg | arXiv preprint arXiv:2102.05007, 2021 | 7 | 2021 |
| Unpacking Tokenization: Evaluating Text Compression and its Correlation with Model Performance | O Goldman, A Caciularu, M Eyal, K Cao, I Szpektor, R Tsarfaty | arXiv preprint arXiv:2403.06265, 2024 | 5 | 2024 |
| Multilingual sequence-to-sequence models for Hebrew NLP | M Eyal, H Noga, R Aharoni, I Szpektor, R Tsarfaty | arXiv preprint arXiv:2212.09682, 2022 | 4 | 2022 |
| Breaking the Language Barrier: Can Direct Inference Outperform Pre-Translation in Multilingual LLM Applications? | Y Intrator, M Halfon, R Goldenberg, R Tsarfaty, M Eyal, E Rivlin, Y Matias, ... | arXiv preprint arXiv:2403.04792, 2024 | 2 | 2024 |
| The Hidden Space of Transformer Language Adapters | JO Alabi, M Mosbach, M Eyal, D Klakow, M Geva | arXiv preprint arXiv:2402.13137, 2024 | 1 | 2024 |
| Question Answering as an Automatic Summarization Evaluation Metric | M Eyal | Ben-Gurion University of the Negev, 2018 | | 2018 |