Learning to Augment for Casual User Recommendation



Jianling Wang, Ya Le, Bo Chang, Yuyan Wang, Ed H. Chi, Minmin Chen

* Accepted by TheWebConf 2022 


Recency Dropout for Recurrent Recommender Systems



Bo Chang, Can Xu, Matthieu Lê, Jingchen Feng, Ya Le, Sriraj Badam, Ed Chi, Minmin Chen



Towards Content Provider Aware Recommender Systems: A Simulation Study on the Interplay between User and Provider Utilities



Ruohan Zhan, Konstantina Christakopoulou, Ya Le, Jayden Ooi, Martin Mladenov, Alex Beutel, Craig Boutilier, Ed H. Chi, Minmin Chen



Quantifying Long Range Dependence in Language and User Behavior to improve RNNs



Francois Belletti, Minmin Chen, Ed H. Chi



AntisymmetricRNN: A Dynamical System View on Recurrent Neural Networks



Bo Chang, Minmin Chen, Eldad Haber, Ed H. Chi

* Published as a conference paper at ICLR 2019 


Towards Neural Mixture Recommender for Long Range Dependent User Sequences



Jiaxi Tang, Francois Belletti, Sagar Jain, Minmin Chen, Alex Beutel, Can Xu, Ed H. Chi

* Accepted at WWW 2019 


Dynamical Isometry and a Mean Field Theory of LSTMs and GRUs



Dar Gilboa, Bo Chang, Minmin Chen, Greg Yang, Samuel S. Schoenholz, Ed H. Chi, Jeffrey Pennington



Top-K Off-Policy Correction for a REINFORCE Recommender System



Minmin Chen, Alex Beutel, Paul Covington, Sagar Jain, Francois Belletti, Ed Chi



Dynamical Isometry and a Mean Field Theory of RNNs: Gating Enables Signal Propagation in Recurrent Neural Networks



Minmin Chen, Jeffrey Pennington, Samuel S. Schoenholz

* ICML 2018 Conference Proceedings 


MinimalRNN: Toward More Interpretable and Trainable Recurrent Neural Networks



Minmin Chen

* Presented at NIPS 2017 Symposium on Interpretable Machine Learning 

