Yi Tay
Research Scientist, Google Brain
Verified email at google.com - Homepage

Title | Cited by | Year
Deep learning based recommender system: A survey and new perspectives
S Zhang, L Yao, A Sun, Y Tay
ACM Computing Surveys, 2017
2998 · 2017
Palm: Scaling language modeling with pathways
A Chowdhery, S Narang, J Devlin, M Bosma, G Mishra, A Roberts, ...
Journal of Machine Learning Research 24 (240), 1-113, 2023
2943 · 2023
Emergent abilities of large language models
J Wei, Y Tay, R Bommasani, C Raffel, B Zoph, S Borgeaud, D Yogatama, ...
Transactions of Machine Learning Research (TMLR), 2022
1451 · 2022
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, Y Li, X Wang, ...
arXiv preprint arXiv:2210.11416, 2022
1395 · 2022
Efficient Transformers: A Survey
Y Tay, M Dehghani, D Bahri, D Metzler
ACM Computing Surveys, 2022
1012* · 2022
Palm 2 technical report
R Anil, AM Dai, O Firat, M Johnson, D Lepikhin, A Passos, S Shakeri, ...
arXiv preprint arXiv:2305.10403, 2023
596 · 2023
Quaternion Knowledge Graph Embedding
S Zhang*, Y Tay*, L Yao, Q Liu
NeurIPS 2019, 2019
484 · 2019
Long Range Arena: A Benchmark for Efficient Transformers
Y Tay*, M Dehghani*, S Abnar, Y Shen, D Bahri, P Pham, J Rao, L Yang, ...
ICLR 2021, 2020
451 · 2020
Multi-Pointer Co-Attention Networks for Recommendation
Y Tay, LA Tuan, SC Hui
KDD 2018, 2018
329 · 2018
Latent Relational Metric Learning via Memory-based Attention for Collaborative Ranking
Y Tay, LA Tuan, SC Hui
Proceedings of WWW 2018, 2018
325 · 2018
Synthesizer: Rethinking self-attention in transformer models
Y Tay, D Bahri, D Metzler, DC Juan, Z Zhao, C Zheng
ICML 2021, 2020
314 · 2020
Sparse Sinkhorn Attention
Y Tay, D Bahri, L Yang, D Metzler, DC Juan
ICML 2020, 2020
278 · 2020
The flan collection: Designing data and methods for effective instruction tuning
S Longpre, L Hou, T Vu, A Webson, HW Chung, Y Tay, D Zhou, QV Le, ...
arXiv preprint arXiv:2301.13688, 2023
270 · 2023
Dive into Deep Learning: Recommender Systems
S Zhang, A Zhang, Y Tay
238* · 2019
Next item recommendation with self-attention
S Zhang, Y Tay, L Yao, A Sun
arXiv preprint arXiv:1808.06414, 2018
238* · 2018
UL2: Unifying Language Learning Paradigms
Y Tay, M Dehghani, VQ Tran, X Garcia, J Wei, X Wang, HW Chung, ...
ICLR 2023, 2022
233* · 2022
Challenging big-bench tasks and whether chain-of-thought can solve them
M Suzgun, N Scales, N Schärli, S Gehrmann, Y Tay, HW Chung, ...
arXiv preprint arXiv:2210.09261, 2022
224 · 2022
Scaling vision transformers to 22 billion parameters
M Dehghani, J Djolonga, B Mustafa, P Padlewski, J Heek, J Gilmer, ...
International Conference on Machine Learning, 7480-7512, 2023
198 · 2023
Learning to Attend via Word-Aspect Associative Fusion for Aspect-based Sentiment Analysis
Y Tay, AT Luu, SC Hui
Proceedings of AAAI 2018, 2018
196 · 2018
Hyperbolic Representation Learning for Fast and Efficient Neural Question Answering
Y Tay, LA Tuan, SC Hui
WSDM 2018, 583-591, 2018
172* · 2018
Articles 1–20