Honglak Lee

University of Michigan, Ann Arbor

Adversarial Environment Generation for Learning to Navigate the Web


Mar 02, 2021
Izzeddin Gur, Natasha Jaques, Kevin Malta, Manoj Tiwari, Honglak Lee, Aleksandra Faust

* Presented at the Deep RL Workshop, NeurIPS 2020 


State Entropy Maximization with Random Encoders for Efficient Exploration


Feb 18, 2021
Younggyo Seo, Lili Chen, Jinwoo Shin, Honglak Lee, Pieter Abbeel, Kimin Lee

* First two authors contributed equally, website: https://sites.google.com/view/re3-rl 


Cross-Modal Contrastive Learning for Text-to-Image Generation


Jan 15, 2021
Han Zhang, Jing Yu Koh, Jason Baldridge, Honglak Lee, Yinfei Yang



Evolving Reinforcement Learning Algorithms


Jan 08, 2021
John D. Co-Reyes, Yingjie Miao, Daiyi Peng, Esteban Real, Sergey Levine, Quoc V. Le, Honglak Lee, Aleksandra Faust



Few-shot Sequence Learning with Transformers


Dec 17, 2020
Lajanugen Logeswaran, Ann Lee, Myle Ott, Honglak Lee, Marc'Aurelio Ranzato, Arthur Szlam

* NeurIPS Meta-Learning Workshop 2020 


Text-to-Image Generation Grounded by Fine-Grained User Attention


Nov 07, 2020
Jing Yu Koh, Jason Baldridge, Honglak Lee, Yinfei Yang

* To appear in WACV 2021 


What's in a Loss Function for Image Classification?


Oct 30, 2020
Simon Kornblith, Honglak Lee, Ting Chen, Mohammad Norouzi



Reinforcement Learning for Sparse-Reward Object-Interaction Tasks in First-person Simulated 3D Environments


Oct 28, 2020
Wilka Carvalho, Anthony Liang, Kimin Lee, Sungryull Sohn, Honglak Lee, Richard L. Lewis, Satinder Singh



Bridging Imagination and Reality for Model-Based Deep Reinforcement Learning


Oct 23, 2020
Guangxiang Zhu, Minghao Zhang, Honglak Lee, Chongjie Zhang

* Published at the 34th Conference on Neural Information Processing Systems (NeurIPS 2020) 


i-Mix: A Strategy for Regularizing Contrastive Representation Learning


Oct 17, 2020
Kibok Lee, Yian Zhu, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin, Honglak Lee



Text as Neural Operator: Image Manipulation by Text Instruction


Aug 12, 2020
Tianhao Zhang, Hung-Yu Tseng, Lu Jiang, Honglak Lee, Irfan Essa, Weilong Yang



Predictive Information Accelerates Learning in RL


Jul 24, 2020
Kuang-Huei Lee, Ian Fischer, Anthony Liu, Yijie Guo, Honglak Lee, John Canny, Sergio Guadarrama



Understanding and Diagnosing Vulnerability under Adversarial Attacks


Jul 17, 2020
Haizhong Zheng, Ziqi Zhang, Honglak Lee, Atul Prakash



An Ode to an ODE


Jun 23, 2020
Krzysztof Choromanski, Jared Quincy Davis, Valerii Likhosherstov, Xingyou Song, Jean-Jacques Slotine, Jacob Varley, Honglak Lee, Adrian Weller, Vikas Sindhwani

* 20 pages, 9 figures 


CompressNet: Generative Compression at Extremely Low Bitrates


Jun 14, 2020
Suraj Kiran Raman, Aditya Ramesh, Vijayakrishna Naganoor, Shubham Dash, Giridharan Kumaravelu, Honglak Lee



Context-aware Dynamics Model for Generalization in Model-Based Reinforcement Learning


May 14, 2020
Kimin Lee, Younggyo Seo, Seunghyun Lee, Honglak Lee, Jinwoo Shin

* First two authors contributed equally, website: https://sites.google.com/view/cadm code: https://github.com/younggyoseo/CaDM 


Time Dependence in Non-Autonomous Neural ODEs


May 06, 2020
Jared Quincy Davis, Krzysztof Choromanski, Jake Varley, Honglak Lee, Jean-Jacques Slotine, Valerii Likhosherstov, Adrian Weller, Ameesh Makadia, Vikas Sindhwani



Improved Consistency Regularization for GANs


Feb 11, 2020
Zhengli Zhao, Sameer Singh, Honglak Lee, Zizhao Zhang, Augustus Odena, Han Zhang

* Augustus Odena and Han Zhang contributed equally 


BRPO: Batch Residual Policy Optimization


Feb 08, 2020
Sungryull Sohn, Yinlam Chow, Jayden Ooi, Ofir Nachum, Honglak Lee, Ed Chi, Craig Boutilier



High-Fidelity Synthesis with Disentangled Representation


Jan 13, 2020
Wonkwang Lee, Donggyun Kim, Seunghoon Hong, Honglak Lee



Meta Reinforcement Learning with Autonomous Inference of Subtask Dependencies


Jan 01, 2020
Sungryull Sohn, Hyunjae Woo, Jongwook Choi, Honglak Lee

* In ICLR 2020 


Efficient Adversarial Training with Transferable Adversarial Examples


Dec 27, 2019
Haizhong Zheng, Ziqi Zhang, Juncheng Gu, Honglak Lee, Atul Prakash



How Should an Agent Practice?


Dec 15, 2019
Janarthanan Rajendran, Richard Lewis, Vivek Veeriah, Honglak Lee, Satinder Singh

* AAAI-2020 


High Fidelity Video Prediction with Large Stochastic Recurrent Neural Networks


Nov 05, 2019
Ruben Villegas, Arkanath Pathak, Harini Kannan, Dumitru Erhan, Quoc V. Le, Honglak Lee

* In Advances in Neural Information Processing Systems (NeurIPS), 2019 


Consistency Regularization for Generative Adversarial Networks


Oct 26, 2019
Han Zhang, Zizhao Zhang, Augustus Odena, Honglak Lee



IEG: Robust Neural Network Training to Tackle Severe Label Noise


Oct 13, 2019
Zizhao Zhang, Han Zhang, Sercan O. Arik, Honglak Lee, Tomas Pfister

* v1: first committed preprint, v2: remove small typos in text and figures 


A Simple Randomization Technique for Generalization in Deep Reinforcement Learning


Oct 11, 2019
Kimin Lee, Kibok Lee, Jinwoo Shin, Honglak Lee

* In NeurIPS Workshop on Deep RL, 2019 / First two authors contributed equally 
