Kumar Kshitij Patel
PhD Student, TTIC
Verified email at ttic.edu - Homepage
Title
Cited by
Year
Don't use large mini-batches, use local SGD
T Lin, SU Stich, KK Patel, M Jaggi
arXiv preprint arXiv:1808.07217, 2018
335 · 2018
Is local SGD better than minibatch SGD?
B Woodworth, KK Patel, S Stich, Z Dai, B Bullins, B Mcmahan, O Shamir, ...
International Conference on Machine Learning, 10334-10343, 2020
146 · 2020
Minibatch vs local SGD for heterogeneous distributed learning
BE Woodworth, KK Patel, N Srebro
Advances in Neural Information Processing Systems 33, 6281-6292, 2020
105 · 2020
Communication trade-offs for local-sgd with large step size
A Dieuleveut, KK Patel
Advances in Neural Information Processing Systems 32, 2019
55* · 2019
Corruption-tolerant bandit learning
S Kapoor, KK Patel, P Kar
Machine Learning 108 (4), 687-715, 2019
38 · 2019
A stochastic Newton algorithm for distributed convex optimization
B Bullins, K Patel, O Shamir, N Srebro, BE Woodworth
Advances in Neural Information Processing Systems 34, 26818-26830, 2021
6 · 2021
Communication trade-offs for Local-SGD with large step size
KK Patel, A Dieuleveut
Proceedings of the 33rd International Conference on Neural Information …, 2019
3 · 2019
Towards Optimal Communication Complexity in Distributed Non-Convex Optimization
KK Patel, L Wang, B Woodworth, B Bullins, N Srebro
Advances in Neural Information Processing Systems 36, 2022
2022
Distributed Online and Bandit Convex Optimization
KK Patel, A Saha, L Wang, N Srebro
OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop)
On Convexity and Linear Mode Connectivity in Neural Networks
D Yunis, KK Patel, PHP Savarese, G Vardi, J Frankle, M Walter, K Livescu, ...
OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop)