Cornelius Weber

Internally Rewarded Reinforcement Learning

Feb 01, 2023

Learning Bidirectional Action-Language Translation with Limited Supervision and Incongruent Extra Input

Jan 09, 2023

Disentangling Prosody Representations with Unsupervised Speech Reconstruction

Dec 14, 2022

Whose Emotion Matters? Speaker Detection without Prior Knowledge

Dec 08, 2022

Visually Grounded Commonsense Knowledge Acquisition

Nov 22, 2022

Data Augmentation with Unsupervised Speaking Style Transfer for Speech Emotion Recognition

Nov 16, 2022

Impact Makes a Sound and Sound Makes an Impact: Sound Guides Representations and Explorations

Aug 04, 2022

Learning Flexible Translation between Robot Actions and Language Descriptions

Jul 15, 2022

Knowing Earlier what Right Means to You: A Comprehensive VQA Dataset for Grounding Relative Directions via Multi-Task Learning

Jul 06, 2022

What is Right for Me is Not Yet Right for You: A Dataset for Grounding Relative Directions via Multi-Task Learning

May 05, 2022