Dinesh Manocha

Can LLMs Generate Human-Like Wayfinding Instructions? Towards Platform-Agnostic Embodied Instruction Synthesis

Mar 31, 2024

CoDa: Constrained Generation based Data Augmentation for Low-Resource NLP

Mar 30, 2024

Socially Aware Robot Navigation through Scoring Using Vision-Language Models

Mar 30, 2024

Do Vision-Language Models Understand Compound Nouns?

Mar 30, 2024

DTG : Diffusion-based Trajectory Generation for Mapless Global Navigation

Mar 25, 2024

CoNVOI: Context-aware Navigation using Vision Language Models in Outdoor and Indoor Environments

Mar 22, 2024

AMCO: Adaptive Multimodal Coupling of Vision and Proprioception for Quadruped Robot Navigation in Outdoor Environments

Mar 20, 2024

Towards Robots That Know When They Need Help: Affordance-Based Uncertainty for Large Language Model Planners

Mar 19, 2024

Global Optimality without Mixing Time Oracles in Average-reward RL via Multi-level Actor-Critic

Mar 18, 2024

Right Place, Right Time! Towards ObjectNav for Non-Stationary Goals

Mar 14, 2024