
Matthias Kerzel

University of Hamburg

Clarifying the Half Full or Half Empty Question: Multimodal Container Classification

Jul 17, 2023

NICOL: A Neuro-inspired Collaborative Semi-humanoid Robot that Bridges Social Interaction and Reliable Manipulation

May 15, 2023

Learning Bidirectional Action-Language Translation with Limited Supervision and Incongruent Extra Input

Jan 09, 2023

Neuro-Symbolic Spatio-Temporal Reasoning

Nov 28, 2022

Learning to Autonomously Reach Objects with NICO and Grow-When-Required Networks

Oct 17, 2022

Intelligent problem-solving as integrated hierarchical reinforcement learning

Aug 18, 2022

Learning Flexible Translation between Robot Actions and Language Descriptions

Jul 15, 2022

Knowing Earlier what Right Means to You: A Comprehensive VQA Dataset for Grounding Relative Directions via Multi-Task Learning

Jul 06, 2022

What is Right for Me is Not Yet Right for You: A Dataset for Grounding Relative Directions via Multi-Task Learning

May 05, 2022

Language Model-Based Paired Variational Autoencoders for Robotic Language Learning

Jan 17, 2022