Cornelius Weber

Whose Emotion Matters? Speaker Detection without Prior Knowledge

Nov 23, 2022
Hugo Carneiro, Cornelius Weber, Stefan Wermter

Visually Grounded Commonsense Knowledge Acquisition

Nov 22, 2022
Yuan Yao, Tianyu Yu, Ao Zhang, Mengdi Li, Ruobing Xie, Cornelius Weber, Zhiyuan Liu, Haitao Zheng, Stefan Wermter, Tat-Seng Chua, Maosong Sun

Data Augmentation with Unsupervised Speaking Style Transfer for Speech Emotion Recognition

Nov 16, 2022
Leyuan Qu, Wei Wang, Taihao Li, Cornelius Weber, Stefan Wermter, Fuji Ren

Impact Makes a Sound and Sound Makes an Impact: Sound Guides Representations and Explorations

Aug 04, 2022
Xufeng Zhao, Cornelius Weber, Muhammad Burhan Hafez, Stefan Wermter

Learning Flexible Translation between Robot Actions and Language Descriptions

Jul 15, 2022
Ozan Özdemir, Matthias Kerzel, Cornelius Weber, Jae Hee Lee, Stefan Wermter

Knowing Earlier what Right Means to You: A Comprehensive VQA Dataset for Grounding Relative Directions via Multi-Task Learning

Jul 06, 2022
Kyra Ahrens, Matthias Kerzel, Jae Hee Lee, Cornelius Weber, Stefan Wermter

What is Right for Me is Not Yet Right for You: A Dataset for Grounding Relative Directions via Multi-Task Learning

May 05, 2022
Jae Hee Lee, Matthias Kerzel, Kyra Ahrens, Cornelius Weber, Stefan Wermter

A Multimodal German Dataset for Automatic Lip Reading Systems and Transfer Learning

Feb 27, 2022
Gerald Schwiebert, Cornelius Weber, Leyuan Qu, Henrique Siqueira, Stefan Wermter

Language Model-Based Paired Variational Autoencoders for Robotic Language Learning

Jan 17, 2022
Ozan Özdemir, Matthias Kerzel, Cornelius Weber, Jae Hee Lee, Stefan Wermter

LipSound2: Self-Supervised Pre-Training for Lip-to-Speech Reconstruction and Lip Reading

Dec 09, 2021
Leyuan Qu, Cornelius Weber, Stefan Wermter
