Jun Miura
Natural Language as Policies: Reasoning for Coordinate-Level Embodied Control with LLMs

Mar 20, 2024
Yusuke Mikami, Andrew Melnik, Jun Miura, Ville Hautamäki

(Figures 1–4)

DeepIPCv2: LiDAR-powered Robust Environmental Perception and Navigational Control for Autonomous Vehicle

Jul 31, 2023
Oskar Natan, Jun Miura

(Figures 1–4)

Multi-Source Soft Pseudo-Label Learning with Domain Similarity-based Weighting for Semantic Segmentation

Mar 02, 2023
Shigemichi Matsuzaki, Hiroaki Masuzawa, Jun Miura

(Figures 1–4)

Online Refinement of a Scene Recognition Model for Mobile Robots by Observing Human's Interaction with Environments

Aug 13, 2022
Shigemichi Matsuzaki, Hiroaki Masuzawa, Jun Miura

(Figures 1–4)

DeepIPC: Deeply Integrated Perception and Control for Mobile Robot in Real Environments

Aug 02, 2022
Oskar Natan, Jun Miura

(Figures 1–4)

Fully End-to-end Autonomous Driving with Semantic Depth Cloud Mapping and Multi-Agent

Apr 12, 2022
Oskar Natan, Jun Miura

(Figures 1–4)

Semantic-aware plant traversability estimation in plant-rich environments for agricultural mobile robots

Aug 02, 2021
Shigemichi Matsuzaki, Jun Miura, Hiroaki Masuzawa

(Figures 1–4)

Multi-task Learning with Attention for End-to-end Autonomous Driving

Apr 21, 2021
Keishi Ishihara, Anssi Kanervisto, Jun Miura, Ville Hautamäki

(Figures 1–4)