Rohan Anil
Principal Engineer, Google Brain
Verified email at google.com
Title · Cited by · Year
Wide & deep learning for recommender systems
HT Cheng, L Koc, J Harmsen, T Shaked, T Chandra, H Aradhye, ...
Proceedings of the 1st workshop on deep learning for recommender systems, 7-10, 2016
Cited by 2535 · 2016
Large scale distributed neural network training through online distillation
R Anil, G Pereyra, AT Passos, R Ormandi, G Dahl, G Hinton
Sixth International Conference on Learning Representations, 2018
Cited by 310 · 2018
Lingvo: a modular and scalable framework for sequence-to-sequence modeling
J Shen, P Nguyen, Y Wu, Z Chen, MX Chen, Y Jia, A Kannan, T Sainath, ...
arXiv preprint arXiv:1902.08295, 2019
Cited by 148 · 2019
Tf-ranking: Scalable tensorflow library for learning-to-rank
RK Pasumarthi, S Bruch, X Wang, C Li, M Bendersky, M Najork, J Pfeifer, ...
Proceedings of the 25th ACM SIGKDD International Conference on Knowledge …, 2019
Cited by 108 · 2019
Robust bi-tempered logistic loss based on bregman divergences
E Amid, MK Warmuth, R Anil, T Koren
2019 Conference on Neural Information Processing Systems, 2019
Cited by 74 · 2019
Scalable Second Order Optimization for Deep Learning
R Anil, V Gupta, T Koren, K Regan, Y Singer
arXiv preprint arXiv:2002.09018, 2020
Cited by 45* · 2020
Memory-efficient adaptive optimization for large-scale learning
R Anil, V Gupta, T Koren, Y Singer
2019 Conference on Neural Information Processing Systems, 2019
Cited by 40* · 2019
Efficiently Identifying Task Groupings for Multi-Task Learning
C Fifty, E Amid, Z Zhao, T Yu, R Anil, C Finn
2021 Conference on Neural Information Processing Systems, Spotlight, 2021
Cited by 34* · 2021
Knowledge distillation: A good teacher is patient and consistent
L Beyer, X Zhai, A Royer, L Markeeva, R Anil, A Kolesnikov
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Cited by 32 · 2022
Large-Scale Differentially Private BERT
R Anil, B Ghazi, V Gupta, R Kumar, P Manurangsi
Privacy Preserving Machine Learning, 2021
Cited by 24 · 2021
Disentangling adaptive gradient methods from learning rates
N Agarwal, R Anil, E Hazan, T Koren, C Zhang
arXiv preprint arXiv:2002.11803, 2020
Cited by 22* · 2020
A large batch optimizer reality check: Traditional, generic optimizers suffice across batch sizes
Z Nado, JM Gilmer, CJ Shallue, R Anil, GE Dahl
arXiv preprint arXiv:2102.06356, 2021
Cited by 20 · 2021
Wide and deep machine learning models
T Shaked, R Anil, HB Aradhye, G Anderson, W Chai, ML Koc, J Harmsen, ...
US Patent 10,762,422, 2020
Cited by 16 · 2020
Stochastic Optimization with Laggard Data Pipelines
N Agarwal, R Anil, T Koren, K Talwar, C Zhang
2020 Conference on Neural Information Processing Systems, 2020
Cited by 8 · 2020
Locoprop: Enhancing backprop via local loss optimization
E Amid, R Anil, MK Warmuth
The 25th International Conference on Artificial Intelligence and Statistics …, 2021
Cited by 7 · 2021
Learning from Randomly Initialized Neural Network Features
E Amid, R Anil, W Kotłowski, MK Warmuth
arXiv preprint arXiv:2202.06438, 2022
Cited by 2 · 2022
Step-size Adaptation Using Exponentiated Gradient Updates
E Amid, R Anil, C Fifty, MK Warmuth
ICML’20 Workshop on “Beyond First Order Methods in ML”, Spotlight, 2020
Cited by 2 · 2020
Layerwise Bregman Representation Learning with Applications to Knowledge Distillation
E Amid, R Anil, C Fifty, MK Warmuth
arXiv preprint arXiv:2209.07080, 2022
2022
On the Factory Floor: ML Engineering for Industrial-Scale Ads Recommendation Models
R Anil, S Gadanho, D Huang, N Jacob, Z Li, D Lin, T Phillips, C Pop, ...
arXiv preprint arXiv:2209.05310, 2022
2022
Training neural networks using layer-wise losses
E Amid, R Anil, MK Warmuth
US Patent App. 17/666,488, 2022
2022