Daniel Tanneberg

MERGE: Guided Vision-Language Models for Multi-Actor Event Reasoning and Grounding in Human-Robot Interaction

Mar 19, 2026

Local Pairwise Distance Matching for Backpropagation-Free Reinforcement Learning

Jul 15, 2025

Neuro-Symbolic Imitation Learning: Discovering Symbolic Abstractions for Skill Learning

Mar 27, 2025

Tulip Agent -- Enabling LLM-Based Agents to Solve Tasks Using Large Tool Libraries

Jul 31, 2024

Efficient Symbolic Planning with Views

May 06, 2024

To Help or Not to Help: LLM-based Attentive Support for Human-Robot Group Interactions

Mar 19, 2024

Large Language Models for Multi-Modal Human-Robot Interaction

Jan 26, 2024

CoPAL: Corrective Planning of Robot Actions with Large Language Models

Oct 11, 2023

Learning Type-Generalized Actions for Symbolic Planning

Aug 09, 2023

Intention estimation from gaze and motion features for human-robot shared-control object manipulation

Aug 18, 2022