Bridging Exploration and General Function Approximation in Reinforcement Learning: Provably Efficient Kernel and Neural Value Iterations

Nov 09, 2020
Zhuoran Yang, Chi Jin, Zhaoran Wang, Mengdi Wang, Michael I. Jordan

* 76 pages. The short version of this work appears in NeurIPS 2020 

A Sharp Analysis of Model-based Reinforcement Learning with Self-Play

Oct 04, 2020
Qinghua Liu, Tiancheng Yu, Yu Bai, Chi Jin


Near-Optimal Reinforcement Learning with Self-Play

Jul 14, 2020
Yu Bai, Chi Jin, Tiancheng Yu


Sample-Efficient Reinforcement Learning of Undercomplete POMDPs

Jun 22, 2020
Chi Jin, Sham M. Kakade, Akshay Krishnamurthy, Qinghua Liu


On the Theory of Transfer Learning: The Importance of Task Diversity

Jun 20, 2020
Nilesh Tripuraneni, Michael I. Jordan, Chi Jin


Provable Meta-Learning of Linear Representations

Feb 26, 2020
Nilesh Tripuraneni, Chi Jin, Michael I. Jordan


Provable Self-Play Algorithms for Competitive Reinforcement Learning

Feb 23, 2020
Yu Bai, Chi Jin


Reward-Free Exploration for Reinforcement Learning

Feb 07, 2020
Chi Jin, Akshay Krishnamurthy, Max Simchowitz, Tiancheng Yu


Near-Optimal Algorithms for Minimax Optimization

Feb 05, 2020
Tianyi Lin, Chi Jin, Michael I. Jordan

* 40 pages 

Learning Adversarial MDPs with Bandit Feedback and Unknown Transition

Jan 07, 2020
Chi Jin, Tiancheng Jin, Haipeng Luo, Suvrit Sra, Tiancheng Yu

* Improved the algorithm with a tighter confidence set 

Provably Efficient Exploration in Policy Optimization

Dec 12, 2019
Qi Cai, Zhuoran Yang, Chi Jin, Zhaoran Wang


Provably Efficient Reinforcement Learning with Linear Function Approximation

Aug 08, 2019
Chi Jin, Zhuoran Yang, Zhaoran Wang, Michael I. Jordan


On Gradient Descent Ascent for Nonconvex-Concave Minimax Problems

Jun 02, 2019
Tianyi Lin, Chi Jin, Michael I. Jordan


Stochastic Gradient Descent Escapes Saddle Points Efficiently

Feb 13, 2019
Chi Jin, Praneeth Netrapalli, Rong Ge, Sham M. Kakade, Michael I. Jordan


A Short Note on Concentration Inequalities for Random Vectors with SubGaussian Norm

Feb 11, 2019
Chi Jin, Praneeth Netrapalli, Rong Ge, Sham M. Kakade, Michael I. Jordan


Minmax Optimization: Stable Limit Points of Gradient Descent Ascent are Locally Optimal

Feb 02, 2019
Chi Jin, Praneeth Netrapalli, Michael I. Jordan


Sampling Can Be Faster Than Optimization

Nov 20, 2018
Yi-An Ma, Yuansi Chen, Chi Jin, Nicolas Flammarion, Michael I. Jordan


On the Local Minima of the Empirical Risk

Oct 17, 2018
Chi Jin, Lydia T. Liu, Rong Ge, Michael I. Jordan

* To appear in NIPS 2018 

Is Q-learning Provably Efficient?

Jul 10, 2018
Chi Jin, Zeyuan Allen-Zhu, Sebastien Bubeck, Michael I. Jordan

* Best paper in ICML 2018 workshop "Exploration in RL" 

Stability and Convergence Trade-off of Iterative Optimization Algorithms

Apr 04, 2018
Yuansi Chen, Chi Jin, Bin Yu

* 45 pages, 7 figures 

Stochastic Cubic Regularization for Fast Nonconvex Optimization

Dec 05, 2017
Nilesh Tripuraneni, Mitchell Stern, Chi Jin, Jeffrey Regier, Michael I. Jordan

* The first two authors contributed equally 

Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent

Nov 28, 2017
Chi Jin, Praneeth Netrapalli, Michael I. Jordan


Gradient Descent Can Take Exponential Time to Escape Saddle Points

Nov 05, 2017
Simon S. Du, Chi Jin, Jason D. Lee, Michael I. Jordan, Barnabas Poczos, Aarti Singh

* Accepted by NIPS 2017 

No Spurious Local Minima in Nonconvex Low Rank Problems: A Unified Geometric Analysis

Apr 03, 2017
Rong Ge, Chi Jin, Yi Zheng


How to Escape Saddle Points Efficiently

Mar 02, 2017
Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M. Kakade, Michael I. Jordan


Local Maxima in the Likelihood of Gaussian Mixture Models: Structural Results and Algorithmic Consequences

Sep 04, 2016
Chi Jin, Yuchen Zhang, Sivaraman Balakrishnan, Martin J. Wainwright, Michael I. Jordan

* Neural Information Processing Systems (NIPS) 2016 

Robust Shift-and-Invert Preconditioning: Faster and More Sample Efficient Algorithms for Eigenvector Computation

May 30, 2016
Chi Jin, Sham M. Kakade, Cameron Musco, Praneeth Netrapalli, Aaron Sidford

* Manuscript outdated. Updated version at arxiv:1605.08754 
