Aleksandr Beznosikov
Verified email at phystech.edu
Title | Cited by | Year
On biased compression for distributed learning
A Beznosikov, S Horváth, P Richtárik, M Safaryan
NeurIPS 2020, Workshop on Scalability, Privacy, and Security in Federated …, 2020
Cited by 80 | 2020
Derivative-free method for composite optimization with applications to decentralized distributed optimization
A Beznosikov, E Gorbunov, A Gasnikov
IFAC-PapersOnLine 53 (2), 4038-4043, 2020
Cited by 30* | 2020
Decentralized distributed optimization for saddle point problems
A Rogozin, A Beznosikov, D Dvinskikh, D Kovalev, P Dvurechensky, ...
arXiv preprint arXiv:2102.07758, 2021
Cited by 26 | 2021
Recent theoretical advances in decentralized distributed convex optimization
E Gorbunov, A Rogozin, A Beznosikov, D Dvinskikh, A Gasnikov
High-Dimensional Optimization and Probability, 253-325, 2022
Cited by 22 | 2022
Distributed Saddle-Point Problems: Lower Bounds, Optimal and Robust Algorithms
A Beznosikov, V Samokhin, A Gasnikov
arXiv preprint arXiv:2010.13112, 2021
Cited by 22* | 2021
Gradient-free methods with inexact oracle for convex-concave stochastic saddle-point problem
A Beznosikov, A Sadiev, A Gasnikov
International Conference on Mathematical Optimization Theory and Operations …, 2020
Cited by 20 | 2020
Solving smooth min-min and min-max problems by mixed oracle algorithms
E Gladin, A Sadiev, A Gasnikov, P Dvurechensky, A Beznosikov, ...
International Conference on Mathematical Optimization Theory and Operations …, 2021
Cited by 16 | 2021
Decentralized local stochastic extra-gradient for variational inequalities
A Beznosikov, P Dvurechensky, A Koloskova, V Samokhin, SU Stich, ...
arXiv preprint arXiv:2106.08315, 2021
Cited by 15 | 2021
Distributed saddle-point problems under data similarity
A Beznosikov, G Scutari, A Rogozin, A Gasnikov
Advances in Neural Information Processing Systems 34, 8172-8184, 2021
Cited by 12 | 2021
Zeroth-order algorithms for smooth saddle-point problems
A Sadiev, A Beznosikov, P Dvurechensky, A Gasnikov
International Conference on Mathematical Optimization Theory and Operations …, 2021
Cited by 12 | 2021
Stochastic gradient descent-ascent: Unified theory and new efficient methods
A Beznosikov, E Gorbunov, H Berard, N Loizou
arXiv preprint arXiv:2202.07262, 2022
Cited by 6 | 2022
Optimal algorithms for decentralized stochastic variational inequalities
D Kovalev, A Beznosikov, A Sadiev, M Persiianov, P Richtárik, ...
arXiv preprint arXiv:2202.02771, 2022
Cited by 5 | 2022
Near-optimal decentralized algorithms for saddle point problems over time-varying networks
A Beznosikov, A Rogozin, D Kovalev, A Gasnikov
International Conference on Optimization and Applications, 246-257, 2021
Cited by 5 | 2021
Decentralized Personalized Federated Min-Max Problems
E Borodich, A Beznosikov, A Sadiev, V Sushko, N Savelyev, M Takáč, ...
arXiv preprint arXiv:2106.07289, 2021
Cited by 5 | 2021
The power of first-order smooth optimization for black-box non-smooth problems
A Gasnikov, A Novitskii, V Novitskii, Farshed ...
International Conference on Machine Learning, 7241-7265, 2022
Cited by 3* | 2022
Optimal Gradient Sliding and its Application to Distributed Optimization Under Similarity
D Kovalev, A Beznosikov, E Borodich, A Gasnikov, G Scutari
arXiv preprint arXiv:2205.15136, 2022
Cited by 3 | 2022
Distributed methods with compressed communication for solving variational inequalities, with theoretical guarantees
A Beznosikov, P Richtárik, M Diskin, M Ryabinin, A Gasnikov
arXiv preprint arXiv:2110.03313, 2021
Cited by 3 | 2021
One-point gradient-free methods for composite optimization with applications to distributed optimization
I Stepanov, A Voronov, A Beznosikov, A Gasnikov
arXiv preprint arXiv:2107.05951, 2021
Cited by 3 | 2021
Decentralized Personalized Federated Learning: Lower Bounds and Optimal Algorithm for All Personalization Modes
A Sadiev, E Borodich, A Beznosikov, D Dvinskikh, S Chezhegov, ...
EURO Journal on Computational Optimization, 100041, 2022
Cited by 1* | 2022
On scaled methods for saddle point problems
A Beznosikov, A Alanov, D Kovalev, M Takáč, A Gasnikov
arXiv preprint arXiv:2206.08303, 2022
Cited by 1 | 2022
Articles 1–20