Naoaki Kanazawa

Learning-Based Wiping Behavior of Low-Rigidity Robots Considering Various Surface Materials and Task Definitions

Mar 17, 2024
Kento Kawaharazuka, Naoaki Kanazawa, Kei Okada, Masayuki Inaba

Continuous Object State Recognition for Cooking Robots Using Pre-Trained Vision-Language Models and Black-box Optimization

Mar 13, 2024
Kento Kawaharazuka, Naoaki Kanazawa, Yoshiki Obinata, Kei Okada, Masayuki Inaba

Daily Assistive View Control Learning of Low-Cost Low-Rigidity Robot via Large-Scale Vision-Language Model

Dec 12, 2023
Kento Kawaharazuka, Naoaki Kanazawa, Yoshiki Obinata, Kei Okada, Masayuki Inaba

Binary State Recognition by Robots using Visual Question Answering of Pre-Trained Vision-Language Model

Oct 25, 2023
Kento Kawaharazuka, Yoshiki Obinata, Naoaki Kanazawa, Kei Okada, Masayuki Inaba

Open X-Embodiment: Robotic Learning Datasets and RT-X Models

Oct 17, 2023
Open X-Embodiment Collaboration, Abhishek Padalkar, Acorn Pooley, Ajinkya Jain, Alex Bewley, Alex Herzog, Alex Irpan, Alexander Khazatsky, Anant Rai, Anikait Singh, Anthony Brohan, Antonin Raffin, Ayzaan Wahid, Ben Burgess-Limerick, Beomjoon Kim, Bernhard Schölkopf, Brian Ichter, Cewu Lu, Charles Xu, Chelsea Finn, Chenfeng Xu, Cheng Chi, Chenguang Huang, Christine Chan, Chuer Pan, Chuyuan Fu, Coline Devin, Danny Driess, Deepak Pathak, Dhruv Shah, Dieter Büchler, Dmitry Kalashnikov, Dorsa Sadigh, Edward Johns, Federico Ceola, Fei Xia, Freek Stulp, Gaoyue Zhou, Gaurav S. Sukhatme, Gautam Salhotra, Ge Yan, Giulio Schiavi, Gregory Kahn, Hao Su, Hao-Shu Fang, Haochen Shi, Heni Ben Amor, Henrik I Christensen, Hiroki Furuta, Homer Walke, Hongjie Fang, Igor Mordatch, Ilija Radosavovic, Isabel Leal, Jacky Liang, Jad Abou-Chakra, Jaehyung Kim, Jan Peters, Jan Schneider, Jasmine Hsu, Jeannette Bohg, Jeffrey Bingham, Jiajun Wu, Jialin Wu, Jianlan Luo, Jiayuan Gu, Jie Tan, Jihoon Oh, Jitendra Malik, Jonathan Tompson, Jonathan Yang, Joseph J. Lim, João Silvério, Junhyek Han, Kanishka Rao, Karl Pertsch, Karol Hausman, Keegan Go, Keerthana Gopalakrishnan, Ken Goldberg, Kendra Byrne, Kenneth Oslund, Kento Kawaharazuka, Kevin Zhang, Krishan Rana, Krishnan Srinivasan, Lawrence Yunliang Chen, Lerrel Pinto, Liam Tan, Lionel Ott, Lisa Lee, Masayoshi Tomizuka, Maximilian Du, Michael Ahn, Mingtong Zhang, Mingyu Ding, Mohan Kumar Srirama, Mohit Sharma, Moo Jin Kim, Naoaki Kanazawa, Nicklas Hansen, Nicolas Heess, Nikhil J Joshi, Niko Suenderhauf, Norman Di Palo, Nur Muhammad Mahi Shafiullah, Oier Mees, Oliver Kroemer, Pannag R Sanketi, Paul Wohlhart, Peng Xu, Pierre Sermanet, Priya Sundaresan, Quan Vuong, Rafael Rafailov, Ran Tian, Ria Doshi, Roberto Martín-Martín, Russell Mendonca, Rutav Shah, Ryan Hoque, Ryan Julian, Samuel Bustamante, Sean Kirmani, Sergey Levine, Sherry Moore, Shikhar Bahl, Shivin Dass, Shubham Sonawani, Shuran Song, Sichun Xu, Siddhant Haldar, Simeon Adebola, Simon Guist, Soroush Nasiriany, Stefan Schaal, Stefan Welker, Stephen Tian, Sudeep Dasari, Suneel Belkhale, Takayuki Osa, Tatsuya Harada, Tatsuya Matsushima, Ted Xiao, Tianhe Yu, Tianli Ding, Todor Davchev, Tony Z. Zhao, Travis Armstrong, Trevor Darrell, Vidhi Jain, Vincent Vanhoucke, Wei Zhan, Wenxuan Zhou, Wolfram Burgard, Xi Chen, Xiaolong Wang, Xinghao Zhu, Xuanlin Li, Yao Lu, Yevgen Chebotar, Yifan Zhou, Yifeng Zhu, Ying Xu, Yixuan Wang, Yonatan Bisk, Yoonyoung Cho, Youngwoon Lee, Yuchen Cui, Yueh-Hua Wu, Yujin Tang, Yuke Zhu, Yunzhu Li, Yusuke Iwasawa, Yutaka Matsuo, Zhuo Xu, Zichen Jeff Cui

Semantic Scene Difference Detection in Daily Life Patroling by Mobile Robots using Pre-Trained Large-Scale Vision-Language Model

Sep 28, 2023
Yoshiki Obinata, Kento Kawaharazuka, Naoaki Kanazawa, Naoya Yamaguchi, Naoto Tsukamoto, Iori Yanokura, Shingo Kitagawa, Koki Shinjo, Kei Okada, Masayuki Inaba

Recognition of Heat-Induced Food State Changes by Time-Series Use of Vision-Language Model for Cooking Robot

Sep 06, 2023
Naoaki Kanazawa, Kento Kawaharazuka, Yoshiki Obinata, Kei Okada, Masayuki Inaba

Foundation Model based Open Vocabulary Task Planning and Executive System for General Purpose Service Robots

Aug 07, 2023
Yoshiki Obinata, Naoaki Kanazawa, Kento Kawaharazuka, Iori Yanokura, Soonhyo Kim, Kei Okada, Masayuki Inaba

Robotic Applications of Pre-Trained Vision-Language Models to Various Recognition Behaviors

Mar 10, 2023
Kento Kawaharazuka, Yoshiki Obinata, Naoaki Kanazawa, Kei Okada, Masayuki Inaba
