
Cornelius Weber

Improving Model-Based Reinforcement Learning with Internal State Representations through Self-Supervision

Feb 10, 2021

Crossmodal Language Grounding in an Embodied Neurocognitive Model

Jun 24, 2020

Improving Robot Dual-System Motor Learning with Intrinsically Motivated Meta-Control and Latent-Space Experience Imagination

Apr 19, 2020

Enriching Existing Conversational Emotion Datasets with Dialogue Acts using Neural Annotators

Dec 05, 2019

Periodic Spectral Ergodicity: A Complexity Measure for Deep Neural Networks and Neural Architecture Search

Nov 10, 2019

Efficient Intrinsically Motivated Robotic Grasping with Learning-Adaptive Imagination in Latent Space

Oct 10, 2019

What can computational models learn from human selective attention? A review from an audiovisual crossmodal perspective

Sep 05, 2019

Curious Meta-Controller: Adaptive Alternation between Model-Based and Model-Free Control in Deep Reinforcement Learning

May 05, 2019

KT-Speech-Crawler: Automatic Dataset Construction for Speech Recognition from YouTube Videos

Mar 01, 2019

Incorporating End-to-End Speech Recognition Models for Sentiment Analysis

Feb 28, 2019