Mostafa Dehghani
Research Scientist, Google DeepMind
Verified email at google.com - Homepage
Title
Cited by
Year
An image is worth 16x16 words: Transformers for image recognition at scale
A Dosovitskiy, L Beyer, A Kolesnikov, D Weissenborn, X Zhai, ...
arXiv preprint arXiv:2010.11929, 2020
25639, 2020
Vivit: A video vision transformer
A Arnab*, M Dehghani*, G Heigold, C Sun, M Lučić, C Schmid
arXiv preprint arXiv:2103.15691, 2021
1371, 2021
Efficient transformers: A survey
Y Tay, M Dehghani, D Bahri, D Metzler
ACM Computing Surveys 55 (6), 1–28, 2022
865*, 2022
Universal Transformers
M Dehghani, S Gouws, O Vinyals, J Uszkoreit, Ł Kaiser
International Conference on Learning Representations (ICLR), 2019
804, 2019
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, Y Li, X Wang, ...
arXiv preprint arXiv:2210.11416, 2022
789, 2022
Long Range Arena: A Benchmark for Efficient Transformers
Y Tay*, M Dehghani*, S Abnar, Y Shen, D Bahri, P Pham, J Rao, L Yang, ...
arXiv preprint arXiv:2011.04006, 2020
386, 2020
Neural Ranking Models with Weak Supervision
M Dehghani, H Zamani, A Severyn, J Kamps, WB Croft
The 40th International ACM SIGIR Conference on Research and Development in …, 2017
385, 2017
Palm 2 technical report
R Anil, AM Dai, O Firat, M Johnson, D Lepikhin, A Passos, S Shakeri, ...
arXiv preprint arXiv:2305.10403, 2023
302, 2023
MetNet: A neural weather model for precipitation forecasting
CK Sønderby, L Espeholt, J Heek, M Dehghani, A Oliver, T Salimans, ...
arXiv preprint arXiv:2003.12140, 2020
260*, 2020
Unifying language learning paradigms
Y Tay, M Dehghani, VQ Tran, X Garcia, D Bahri, T Schuster, HS Zheng, ...
arXiv preprint arXiv:2205.05131, 2022
167*, 2022
From neural re-ranking to neural ranking: Learning a sparse representation for inverted indexing
H Zamani, M Dehghani, WB Croft, E Learned-Miller, J Kamps
Proceedings of the 27th ACM international conference on information and …, 2018
167, 2018
Simple open-vocabulary object detection
M Minderer, A Gritsenko, A Stone, M Neumann, D Weissenborn, ...
European Conference on Computer Vision, 728-755, 2022
164, 2022
Parameter-efficient multi-task fine-tuning for transformers via shared hypernetworks
RK Mahabadi, S Ruder, M Dehghani, J Henderson
arXiv preprint arXiv:2106.04489, 2021
162, 2021
Scaling vision transformers to 22 billion parameters
M Dehghani, J Djolonga, B Mustafa, P Padlewski, J Heek, J Gilmer, ...
International Conference on Machine Learning, 7480-7512, 2023
131, 2023
Learning to Attend, Copy, and Generate for Session-Based Query Suggestion
M Dehghani, S Rothe, E Alfonseca, P Fleury
International Conference on Information and Knowledge Management (CIKM'17), 2017
112, 2017
Transformer memory as a differentiable search index
Y Tay, V Tran, M Dehghani, J Ni, D Bahri, H Mehta, Z Qin, K Hui, Z Zhao, ...
Advances in Neural Information Processing Systems 35, 21831-21843, 2022
105, 2022
TokenLearner: Adaptive space-time tokenization for videos
M Ryoo, AJ Piergiovanni, A Arnab, M Dehghani, A Angelova
Advances in Neural Information Processing Systems 34, 12786-12797, 2021
104, 2021
TokenLearner: What can 8 learned tokens do for images and videos?
MS Ryoo, AJ Piergiovanni, A Arnab, M Dehghani, A Angelova
arXiv preprint arXiv:2106.11297, 2021
90, 2021
Exploring the limits of large scale pre-training
S Abnar, M Dehghani, B Neyshabur, H Sedghi
arXiv preprint arXiv:2110.02095, 2021
81, 2021
The Efficiency Misnomer
M Dehghani, A Arnab, L Beyer, A Vaswani, Y Tay
The International Conference on Learning Representations (ICLR), 2022
74, 2022
Articles 1–20