Fei Xia

Learning Model Predictive Controllers with Real-Time Attention for Real-World Navigation

Sep 24, 2022
Xuesu Xiao, Tingnan Zhang, Krzysztof Choromanski, Edward Lee, Anthony Francis, Jake Varley, Stephen Tu, Sumeet Singh, Peng Xu, Fei Xia, Sven Mikael Persson, Dmitry Kalashnikov, Leila Takayama, Roy Frostig, Jie Tan, Carolina Parada, Vikas Sindhwani

Open-vocabulary Queryable Scene Representations for Real World Planning

Sep 20, 2022
Boyuan Chen, Fei Xia, Brian Ichter, Kanishka Rao, Keerthana Gopalakrishnan, Michael S. Ryoo, Austin Stone, Daniel Kappler

Code as Policies: Language Model Programs for Embodied Control

Sep 19, 2022
Jacky Liang, Wenlong Huang, Fei Xia, Peng Xu, Karol Hausman, Brian Ichter, Pete Florence, Andy Zeng

6D Camera Relocalization in Visually Ambiguous Extreme Environments

Jul 13, 2022
Yang Zheng, Tolga Birdal, Fei Xia, Yanchao Yang, Yueqi Duan, Leonidas J. Guibas

Inner Monologue: Embodied Reasoning through Planning with Language Models

Jul 12, 2022
Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, Pierre Sermanet, Noah Brown, Tomas Jackson, Linda Luu, Sergey Levine, Karol Hausman, Brian Ichter

BEHAVIOR in Habitat 2.0: Simulator-Independent Logical Task Description for Benchmarking Embodied AI Agents

Jun 13, 2022
Ziang Liu, Roberto Martín-Martín, Fei Xia, Jiajun Wu, Li Fei-Fei

Physics-based neural network for non-invasive control of coherent light in scattering media

Jun 01, 2022
Alexandra d'Arco, Fei Xia, Antoine Boniface, Jonathan Dong, Sylvain Gigan

LingYi: Medical Conversational Question Answering System based on Multi-modal Knowledge Graphs

Apr 20, 2022
Fei Xia, Bin Li, Yixuan Weng, Shizhu He, Kang Liu, Bin Sun, Shutao Li, Jun Zhao

Towards Better Chinese-centric Neural Machine Translation for Low-resource Languages

Apr 09, 2022
Bin Li, Yixuan Weng, Fei Xia, Hanjun Deng

Do As I Can, Not As I Say: Grounding Language in Robotic Affordances

Apr 04, 2022
Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan
