Amy Zhang
Neural Constraint Satisfaction: Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement

Mar 20, 2023
Michael Chang, Alyssa L. Dayan, Franziska Meier, Thomas L. Griffiths, Sergey Levine, Amy Zhang


Confidence-aware 3D Gaze Estimation and Evaluation Metric

Mar 17, 2023
Qiaojie Zheng, Jiucai Zhang, Amy Zhang, Xiaoli Zhang


Imitation from Arbitrary Experience: A Dual Unification of Reinforcement and Imitation Learning Methods

Feb 16, 2023
Harshit Sikchi, Amy Zhang, Scott Niekum


Provably Efficient Offline Goal-Conditioned Reinforcement Learning with General Function Approximation and Single-Policy Concentrability

Feb 07, 2023
Hanlin Zhu, Amy Zhang


Contrastive Distillation Is a Sample-Efficient Self-Supervised Loss Policy for Transfer Learning

Dec 21, 2022
Chris Lengerich, Gabriel Synnaeve, Amy Zhang, Hugh Leather, Kurt Shuster, François Charton, Charysse Redwood


LAD: Language Augmented Diffusion for Reinforcement Learning

Oct 27, 2022
Edwin Zhang, Yujie Lu, William Wang, Amy Zhang


Latent State Marginalization as a Low-cost Approach for Improving Exploration

Oct 03, 2022
Dinghuai Zhang, Aaron Courville, Yoshua Bengio, Qinqing Zheng, Amy Zhang, Ricky T. Q. Chen


VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training

Sep 30, 2022
Yecheng Jason Ma, Shagun Sodhani, Dinesh Jayaraman, Osbert Bastani, Vikash Kumar, Amy Zhang


Building Human Values into Recommender Systems: An Interdisciplinary Synthesis

Jul 20, 2022
Jonathan Stray, Alon Halevy, Parisa Assar, Dylan Hadfield-Menell, Craig Boutilier, Amar Ashar, Lex Beattie, Michael Ekstrand, Claire Leibowicz, Connie Moon Sehat, Sara Johansen, Lianne Kerlin, David Vickrey, Spandana Singh, Sanne Vrijenhoek, Amy Zhang, McKane Andrus, Natali Helberger, Polina Proutskova, Tanushree Mitra, Nina Vasan


Denoised MDPs: Learning World Models Better Than the World Itself

Jul 18, 2022
Tongzhou Wang, Simon S. Du, Antonio Torralba, Phillip Isola, Amy Zhang, Yuandong Tian
