Tianhao Wang
Research Assistant Professor, Toyota Technological Institute at Chicago
Verified email at ttic.edu - Homepage
Title · Cited by · Year
What Happens after SGD Reaches Zero Loss?--A Mathematical Framework
Z Li, T Wang, S Arora
arXiv preprint arXiv:2110.06914, 2021
Cited by 109 · 2021
Universality of approximate message passing algorithms and tensor networks
T Wang, X Zhong, Z Fan
The Annals of Applied Probability 34 (4), 3943-3994, 2024
Cited by 52 · 2024
Provably efficient reinforcement learning with linear function approximation under adaptivity constraints
T Wang, D Zhou, Q Gu
Advances in Neural Information Processing Systems 34, 13524-13536, 2021
Cited by 49 · 2021
Variance-aware off-policy evaluation with linear function approximation
Y Min, T Wang, D Zhou, Q Gu
Advances in Neural Information Processing Systems 34, 7598-7610, 2021
Cited by 39 · 2021
Learn to match with no regret: Reinforcement learning in Markov matching markets
Y Min, T Wang, R Xu, Z Wang, M Jordan, Z Yang
Advances in Neural Information Processing Systems 35, 19956-19970, 2022
Cited by 34 · 2022
Training dynamics of multi-head softmax attention for in-context learning: Emergence, convergence, and optimality
S Chen, H Sheen, T Wang, Z Yang
arXiv preprint arXiv:2402.19442, 2024
Cited by 33 · 2024
A simple and provably efficient algorithm for asynchronous federated contextual linear bandits
J He, T Wang, Y Min, Q Gu
Advances in Neural Information Processing Systems 35, 4762-4775, 2022
Cited by 33 · 2022
Learning stochastic shortest path with linear function approximation
Y Min, J He, T Wang, Q Gu
International Conference on Machine Learning, 15584-15629, 2022
Cited by 33 · 2022
Approximate message passing for orthogonally invariant ensembles: Multivariate non-linearities and spectral initialization
X Zhong, T Wang, Z Fan
Information and Inference: A Journal of the IMA 13 (3), iaae024, 2024
Cited by 30 · 2024
Implicit bias of gradient descent on reparametrized models: On equivalence to mirror descent
Z Li, T Wang, JD Lee, S Arora
Advances in Neural Information Processing Systems 35, 34626-34640, 2022
Cited by 28 · 2022
Accelerated stochastic mirror descent: From continuous-time dynamics to discrete-time algorithms
P Xu, T Wang, Q Gu
International Conference on Artificial Intelligence and Statistics, 1087-1096, 2018
Cited by 23 · 2018
Continuous and discrete-time accelerated stochastic mirror descent for strongly convex functions
P Xu, T Wang, Q Gu
International Conference on Machine Learning, 5492-5501, 2018
Cited by 22 · 2018
Likelihood landscape and maximum likelihood estimation for the discrete orbit recovery model
Z Fan, Y Sun, T Wang, Y Wu
Communications on Pure and Applied Mathematics 76 (6), 1208-1302, 2023
Cited by 21 · 2023
Fast Mixing of Stochastic Gradient Descent with Normalization and Weight Decay
Z Li, T Wang, D Yu
Advances in Neural Information Processing Systems 35, 9233-9248, 2022
Cited by 18 · 2022
Maximum likelihood for high-noise group orbit estimation and single-particle cryo-EM
Z Fan, RR Lederman, Y Sun, T Wang, S Xu
The Annals of Statistics 52 (1), 52-77, 2024
Cited by 15 · 2024
How Well Can Transformers Emulate In-context Newton's Method?
A Giannou, L Yang, T Wang, D Papailiopoulos, JD Lee
arXiv preprint arXiv:2403.03183, 2024
Cited by 9 · 2024
Cooperative multi-agent reinforcement learning: Asynchronous communication and linear function approximation
Y Min, J He, T Wang, Q Gu
International Conference on Machine Learning, 24785-24811, 2023
Cited by 9 · 2023
North American biliary stricture management strategies in children after liver transplantation: a multicenter analysis from the society of pediatric liver transplantation …
PL Valentino, T Wang, V Shabanova, VL Ng, JC Bucuvalas, AG Feldman, ...
Liver Transplantation 28 (5), 819-833, 2022
Cited by 9 · 2022
The marginal value of momentum for small learning rate SGD
R Wang, S Malladi, T Wang, K Lyu, Z Li
arXiv preprint arXiv:2307.15196, 2023
Cited by 8 · 2023
Implicit Regularization of Gradient Flow on One-Layer Softmax Attention
H Sheen, S Chen, T Wang, HH Zhou
arXiv preprint arXiv:2403.08699, 2024
Cited by 7 · 2024
Articles 1–20