Matthias Kerzel

University of Hamburg

Enhancing a Neurocognitive Shared Visuomotor Model for Object Identification, Localization, and Grasping With Learning From Auxiliary Tasks

Sep 26, 2020
Matthias Kerzel, Fares Abawi, Manfred Eppe, Stefan Wermter

Crossmodal Language Grounding in an Embodied Neurocognitive Model

Jun 24, 2020
Stefan Heinrich, Yuan Yao, Tobias Hinz, Zhiyuan Liu, Thomas Hummel, Matthias Kerzel, Cornelius Weber, Stefan Wermter

Explainable Goal-Driven Agents and Robots: A Comprehensive Review and New Framework

Apr 21, 2020
Fatai Sado, Chu Kiong Loo, Matthias Kerzel, Stefan Wermter

Improving Robot Dual-System Motor Learning with Intrinsically Motivated Meta-Control and Latent-Space Experience Imagination

Apr 19, 2020
Muhammad Burhan Hafez, Cornelius Weber, Matthias Kerzel, Stefan Wermter

Solving Visual Object Ambiguities when Pointing: An Unsupervised Learning Approach

Dec 13, 2019
Doreen Jirak, David Biertimpel, Matthias Kerzel, Stefan Wermter

Efficient Intrinsically Motivated Robotic Grasping with Learning-Adaptive Imagination in Latent Space

Oct 10, 2019
Muhammad Burhan Hafez, Cornelius Weber, Matthias Kerzel, Stefan Wermter

What can computational models learn from human selective attention? A review from an audiovisual crossmodal perspective

Sep 05, 2019
Di Fu, Cornelius Weber, Guochun Yang, Matthias Kerzel, Weizhi Nan, Pablo Barros, Haiyan Wu, Xun Liu, Stefan Wermter

Curious Meta-Controller: Adaptive Alternation between Model-Based and Model-Free Control in Deep Reinforcement Learning

May 05, 2019
Muhammad Burhan Hafez, Cornelius Weber, Matthias Kerzel, Stefan Wermter

Deep Intrinsically Motivated Continuous Actor-Critic for Efficient Robotic Visuomotor Skill Learning

Feb 18, 2019
Muhammad Burhan Hafez, Cornelius Weber, Matthias Kerzel, Stefan Wermter