Dmitry Kovalev
Title · Cited by · Year
Stochastic distributed learning with gradient quantization and double-variance reduction
S Horváth, D Kovalev, K Mishchenko, P Richtárik, S Stich
Optimization Methods and Software, 1-16, 2022
Cited by 126 · 2022
Don’t jump through hoops and remove those loops: SVRG and Katyusha are better without the outer loop
D Kovalev, S Horváth, P Richtárik
Algorithmic Learning Theory, 451-467, 2020
Cited by 116 · 2020
Acceleration for compressed gradient descent in distributed and federated optimization
Z Li, D Kovalev, X Qian, P Richtárik
arXiv preprint arXiv:2002.11364, 2020
Cited by 97 · 2020
From local SGD to local fixed-point methods for federated learning
G Malinovskiy, D Kovalev, E Gasanov, L Condat, P Richtárik
International Conference on Machine Learning, 6692-6701, 2020
Cited by 72 · 2020
Linearly converging error compensated SGD
E Gorbunov, D Kovalev, D Makarenko, P Richtárik
Advances in Neural Information Processing Systems 33, 20889-20900, 2020
Cited by 49 · 2020
RSN: Randomized subspace Newton
R Gower, D Kovalev, F Lieder, P Richtárik
Advances in Neural Information Processing Systems 32, 2019
Cited by 49 · 2019
Revisiting stochastic extragradient
K Mishchenko, D Kovalev, E Shulgin, P Richtárik, Y Malitsky
International Conference on Artificial Intelligence and Statistics, 4573-4582, 2020
Cited by 48 · 2020
Optimal and practical algorithms for smooth and strongly convex decentralized optimization
D Kovalev, A Salim, P Richtárik
Advances in Neural Information Processing Systems 33, 18342-18352, 2020
Cited by 47 · 2020
A linearly convergent algorithm for decentralized optimization: Sending less bits for free!
D Kovalev, A Koloskova, M Jaggi, P Richtárik, S Stich
International Conference on Artificial Intelligence and Statistics, 4087-4095, 2021
Cited by 44 · 2021
Stochastic Newton and cubic Newton methods with simple local linear-quadratic rates
D Kovalev, K Mishchenko, P Richtárik
arXiv preprint arXiv:1912.01597, 2019
Cited by 32 · 2019
Decentralized distributed optimization for saddle point problems
A Rogozin, A Beznosikov, D Dvinskikh, D Kovalev, P Dvurechensky, ...
arXiv preprint arXiv:2102.07758, 2021
Cited by 31 · 2021
Accelerated methods for saddle-point problem
MS Alkousa, AV Gasnikov, DM Dvinskikh, DA Kovalev, FS Stonyakin
Computational Mathematics and Mathematical Physics 60, 1787-1809, 2020
Cited by 26 · 2020
On accelerated methods for saddle-point problems with composite structure
V Tominin, Y Tominin, E Borodich, D Kovalev, A Gasnikov, ...
arXiv preprint arXiv:2103.09344, 2021
Cited by 22 · 2021
Towards accelerated rates for distributed optimization over time-varying networks
A Rogozin, V Lukoshkin, A Gasnikov, D Kovalev, E Shulgin
Optimization and Applications: 12th International Conference, OPTIMA 2021 …, 2021
Cited by 22 · 2021
Accelerated methods for composite non-bilinear saddle point problem
M Alkousa, D Dvinskikh, F Stonyakin, A Gasnikov, D Kovalev
arXiv preprint arXiv:1906.03620, 2019
Cited by 22 · 2019
ADOM: Accelerated decentralized optimization method for time-varying networks
D Kovalev, E Shulgin, P Richtárik, AV Rogozin, A Gasnikov
International Conference on Machine Learning, 5784-5793, 2021
Cited by 19 · 2021
Lower bounds and optimal algorithms for smooth and strongly convex decentralized optimization over time-varying networks
D Kovalev, E Gasanov, A Gasnikov, P Richtárik
Advances in Neural Information Processing Systems 34, 22325-22335, 2021
Cited by 18 · 2021
Distributed fixed point methods with compressed iterates
S Chraibi, A Khaled, D Kovalev, P Richtárik, A Salim, M Takáč
arXiv preprint arXiv:1912.09925, 2019
Cited by 18 · 2019
Accelerated primal-dual gradient method for smooth and convex-concave saddle-point problems with bilinear coupling
D Kovalev, A Gasnikov, P Richtárik
arXiv preprint arXiv:2112.15199, 2021
Cited by 15 · 2021
Variance reduced coordinate descent with acceleration: New method with a surprising application to finite-sum problems
F Hanzely, D Kovalev, P Richtárik
International Conference on Machine Learning, 4039-4048, 2020
Cited by 15 · 2020
Articles 1–20