Chi Jin
Assistant Professor, Princeton University
Verified email at princeton.edu - Homepage
Title
Cited by
Year
Escaping from saddle points—online stochastic gradient for tensor decomposition
R Ge, F Huang, C Jin, Y Yuan
Conference on learning theory, 797-842, 2015
Cited by 1265 · 2015
How to escape saddle points efficiently
C Jin, R Ge, P Netrapalli, SM Kakade, MI Jordan
International conference on machine learning, 1724-1732, 2017
Cited by 995 · 2017
Is Q-learning provably efficient?
C Jin, Z Allen-Zhu, S Bubeck, MI Jordan
Advances in neural information processing systems 31, 2018
Cited by 971 · 2018
Provably efficient reinforcement learning with linear function approximation
C Jin, Z Yang, Z Wang, MI Jordan
Conference on learning theory, 2137-2143, 2020
Cited by 824 · 2020
On gradient descent ascent for nonconvex-concave minimax problems
T Lin, C Jin, M Jordan
International Conference on Machine Learning, 6083-6093, 2020
Cited by 575 · 2020
No spurious local minima in nonconvex low rank problems: A unified geometric analysis
R Ge, C Jin, Y Zheng
International Conference on Machine Learning, 1233-1242, 2017
Cited by 514 · 2017
What is local optimality in nonconvex-nonconcave minimax optimization?
C Jin, P Netrapalli, M Jordan
International conference on machine learning, 4880-4889, 2020
Cited by 454* · 2020
Provably efficient exploration in policy optimization
Q Cai, Z Yang, C Jin, Z Wang
International Conference on Machine Learning, 1283-1294, 2020
Cited by 306 · 2020
Gradient descent can take exponential time to escape saddle points
SS Du, C Jin, JD Lee, MI Jordan, A Singh, B Poczos
Advances in neural information processing systems 30, 2017
Cited by 296 · 2017
Near-optimal algorithms for minimax optimization
T Lin, C Jin, MI Jordan
Conference on Learning Theory, 2738-2779, 2020
Cited by 289 · 2020
Accelerated gradient descent escapes saddle points faster than gradient descent
C Jin, P Netrapalli, MI Jordan
Conference On Learning Theory, 1042-1085, 2018
Cited by 288 · 2018
Reward-free exploration for reinforcement learning
C Jin, A Krishnamurthy, M Simchowitz, T Yu
International Conference on Machine Learning, 4870-4879, 2020
Cited by 247 · 2020
Bellman eluder dimension: New rich classes of RL problems, and sample-efficient algorithms
C Jin, Q Liu, S Miryoosefi
Advances in neural information processing systems 34, 13406-13418, 2021
Cited by 242 · 2021
On nonconvex optimization for machine learning: Gradients, stochasticity, and saddle points
C Jin, P Netrapalli, R Ge, SM Kakade, MI Jordan
Journal of the ACM (JACM) 68 (2), 1-29, 2021
Cited by 239* · 2021
On the theory of transfer learning: The importance of task diversity
N Tripuraneni, M Jordan, C Jin
Advances in neural information processing systems 33, 7852-7862, 2020
Cited by 229 · 2020
Sampling can be faster than optimization
YA Ma, Y Chen, C Jin, N Flammarion, MI Jordan
Proceedings of the National Academy of Sciences 116 (42), 20881-20885, 2019
Cited by 219 · 2019
Provable meta-learning of linear representations
N Tripuraneni, C Jin, M Jordan
International Conference on Machine Learning, 10434-10443, 2021
Cited by 198 · 2021
Stochastic cubic regularization for fast nonconvex optimization
N Tripuraneni, M Stern, C Jin, J Regier, MI Jordan
Advances in neural information processing systems 31, 2018
Cited by 191 · 2018
Local maxima in the likelihood of Gaussian mixture models: Structural results and algorithmic consequences
C Jin, Y Zhang, S Balakrishnan, MJ Wainwright, MI Jordan
Advances in neural information processing systems 29, 2016
Cited by 184 · 2016
Provable self-play algorithms for competitive reinforcement learning
Y Bai, C Jin
International conference on machine learning, 551-560, 2020
Cited by 180 · 2020
Articles 1–20