Michelle A. Lee

MultiBench: Multiscale Benchmarks for Multimodal Representation Learning

Jul 15, 2021
Paul Pu Liang, Yiwei Lyu, Xiang Fan, Zetian Wu, Yun Cheng, Jason Wu, Leslie Chen, Peter Wu, Michelle A. Lee, Yuke Zhu, Ruslan Salakhutdinov, Louis-Philippe Morency

Differentiable Factor Graph Optimization for Learning Smoothers

May 20, 2021
Brent Yi, Michelle A. Lee, Alina Kloss, Roberto Martín-Martín, Jeannette Bohg

Interpreting Contact Interactions to Overcome Failure in Robot Assembly Tasks

Jan 07, 2021
Peter A. Zachares, Michelle A. Lee, Wenzhao Lian, Jeannette Bohg

Detect, Reject, Correct: Crossmodal Compensation of Corrupted Sensors

Dec 01, 2020
Michelle A. Lee, Matthew Tan, Yuke Zhu, Jeannette Bohg

Multimodal Sensor Fusion with Differentiable Filters

Oct 25, 2020
Michelle A. Lee, Brent Yi, Roberto Martín-Martín, Silvio Savarese, Jeannette Bohg

Guided Uncertainty-Aware Policy Optimization: Combining Learning and Model-Based Strategies for Sample-Efficient Policy Learning

May 26, 2020
Michelle A. Lee, Carlos Florensa, Jonathan Tremblay, Nathan Ratliff, Animesh Garg, Fabio Ramos, Dieter Fox

Variable Impedance Control in End-Effector Space: An Action Space for Reinforcement Learning in Contact-Rich Tasks

Aug 02, 2019
Roberto Martín-Martín, Michelle A. Lee, Rachel Gardner, Silvio Savarese, Jeannette Bohg, Animesh Garg

Making Sense of Vision and Touch: Learning Multimodal Representations for Contact-Rich Tasks

Jul 28, 2019
Michelle A. Lee, Yuke Zhu, Peter Zachares, Matthew Tan, Krishnan Srinivasan, Silvio Savarese, Li Fei-Fei, Animesh Garg, Jeannette Bohg

Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks

Mar 08, 2019
Michelle A. Lee, Yuke Zhu, Krishnan Srinivasan, Parth Shah, Silvio Savarese, Li Fei-Fei, Animesh Garg, Jeannette Bohg
