Xin Eric Wang

Anticipating the Unseen Discrepancy for Vision and Language Navigation

Sep 10, 2022
Yujie Lu, Huiliang Zhang, Ping Nie, Weixi Feng, Wenda Xu, Xin Eric Wang, William Yang Wang

JARVIS: A Neuro-Symbolic Commonsense Reasoning Framework for Conversational Embodied Agents

Aug 30, 2022
Kaizhi Zheng, Kaiwen Zhou, Jing Gu, Yue Fan, Jialu Wang, Zonglin Di, Xuehai He, Xin Eric Wang

Understanding Instance-Level Impact of Fairness Constraints

Jun 30, 2022
Jialu Wang, Xin Eric Wang, Yang Liu

VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation

Jun 17, 2022
Kaizhi Zheng, Xiaotong Chen, Odest Chadwicke Jenkins, Xin Eric Wang

Neuro-Symbolic Causal Language Planning with Commonsense Prompting

Jun 06, 2022
Yujie Lu, Weixi Feng, Wanrong Zhu, Wenda Xu, Xin Eric Wang, Miguel Eckstein, William Yang Wang

Aerial Vision-and-Dialog Navigation

May 24, 2022
Yue Fan, Winson Chen, Tongzhou Jiang, Chun Zhou, Yi Zhang, Xin Eric Wang

Imagination-Augmented Natural Language Understanding

Apr 21, 2022
Yujie Lu, Wanrong Zhu, Xin Eric Wang, Miguel Eckstein, William Yang Wang

Vision-and-Language Navigation: A Survey of Tasks, Methods, and Future Directions

Mar 29, 2022
Jing Gu, Eliana Stefani, Qi Wu, Jesse Thomason, Xin Eric Wang

Parameter-efficient Fine-tuning for Vision Transformers

Mar 29, 2022
Xuehai He, Chunyuan Li, Pengchuan Zhang, Jianwei Yang, Xin Eric Wang
