Yanqi Zhou
Verified email at google.com - Homepage
Title
Cited by
Year
Exploring the limits of transfer learning with a unified text-to-text transformer
C Raffel, N Shazeer, A Roberts, K Lee, S Narang, M Matena, Y Zhou, W Li, ...
Journal of machine learning research 21 (140), 1-67, 2020
Cited by 18374 · 2020
Lamda: Language models for dialog applications
R Thoppilan, D De Freitas, J Hall, N Shazeer, A Kulshreshtha, HT Cheng, ...
arXiv preprint arXiv:2201.08239, 2022
Cited by 1425 · 2022
Deep learning scaling is predictable, empirically
J Hestness, S Narang, N Ardalani, G Diamos, H Jun, H Kianinejad, ...
arXiv preprint arXiv:1712.00409, 2017
Cited by 732 · 2017
Deep Voice 2: Multi-Speaker Neural Text-to-Speech
YZ Sercan Arik, Gregory Diamos, Andrew Gibiansky, John Miller, Kainan Peng ...
Neural Information Processing Systems (NIPS), 2017
Cited by 644* · 2017
Glam: Efficient scaling of language models with mixture-of-experts
N Du, Y Huang, AM Dai, S Tong, D Lepikhin, Y Xu, M Krikun, Y Zhou, ...
International Conference on Machine Learning, 5547-5569, 2022
Cited by 487 · 2022
Neural voice cloning with a few samples
S Arik, J Chen, K Peng, W Ping, Y Zhou
Advances in neural information processing systems 31, 2018
Cited by 455 · 2018
OpenPiton: An open source manycore research framework
J Balkind, M McKeown, Y Fu, T Nguyen, Y Zhou, A Lavrov, M Shahrad, ...
ACM SIGPLAN Notices 51 (4), 217-232, 2016
Cited by 275 · 2016
Mixture-of-experts with expert choice routing
Y Zhou, T Lei, H Liu, N Du, Y Huang, V Zhao, AM Dai, QV Le, J Laudon
Advances in Neural Information Processing Systems 35, 7103-7114, 2022
Cited by 205 · 2022
Glam: Efficient scaling of language models with mixture-of-experts
N Du, Y Huang, AM Dai, S Tong, D Lepikhin, Y Xu, M Krikun, Y Zhou, ...
arXiv preprint, 2021
Cited by 148 · 2021
Atomic In-place Updates for Non-volatile Main Memories with Kamino-Tx
A Memaripour, A Badam, A Phanishayee, Y Zhou, R Alagappan, ...
EuroSys '17 Proceedings of the Twelfth European Conference on Computer …, 2017
Cited by 128 · 2017
Lamda: Language models for dialog applications
R Thoppilan, D De Freitas, J Hall, N Shazeer, A Kulshreshtha, HT Cheng, ...
Cited by 97 · 2022
Do transformer modifications transfer across implementations and applications?
S Narang, HW Chung, Y Tay, W Fedus, T Fevry, M Matena, K Malkan, ...
arXiv preprint arXiv:2102.11972, 2021
Cited by 96 · 2021
Exploring the limits of transfer learning with a unified text-to-text transformer
C Raffel, N Shazeer, A Roberts, K Lee, S Narang, M Matena, Y Zhou, W Li, ...
arXiv preprint arXiv:1910.10683, 2019
Cited by 87 · 2019
A learned performance model for tensor processing units
S Kaufman, P Phothilimthana, Y Zhou, C Mendis, S Roy, A Sabne, ...
Proceedings of Machine Learning and Systems 3, 387-400, 2021
Cited by 78 · 2021
Resource-efficient neural architect
Y Zhou, S Ebrahimi, SÖ Arık, H Yu, H Liu, G Diamos
arXiv preprint arXiv:1806.07912, 2018
Cited by 78 · 2018
MITTS: Memory inter-arrival time traffic shaping
Y Zhou, D Wentzlaff
ACM SIGARCH Computer Architecture News 44 (3), 532-544, 2016
Cited by 63 · 2016
Power and Energy Characterization of an Open Source 25-Core Manycore Processor.
M McKeown, A Lavrov, M Shahrad, PJ Jackson, Y Fu, J Balkind, ...
HPCA, 762-775, 2018
Cited by 57 · 2018
Transferable graph optimizers for ml compilers
Y Zhou, S Roy, A Abdolrashidi, D Wong, P Ma, Q Xu, H Liu, ...
Advances in Neural Information Processing Systems 33, 13844-13855, 2020
Cited by 56 · 2020
Deep learning scaling is predictable, empirically
J Hestness, S Narang, N Ardalani, G Diamos, H Jun, H Kianinejad, ...
arXiv 1712, 2, 2017
Cited by 56 · 2017
Exploring the limits of transfer learning with a unified text-to-text transformer
A Roberts, C Raffel, K Lee, M Matena, N Shazeer, PJ Liu, S Narang, W Li, ...
Google, Tech. Rep., 2019
Cited by 51 · 2019
Articles 1–20