Gunther Heidemann

Learning Disentangled Audio Representations through Controlled Synthesis

Feb 16, 2024
Yusuf Brima, Ulf Krumnack, Simone Pika, Gunther Heidemann

Show Me How It's Done: The Role of Explanations in Fine-Tuning Language Models

Feb 12, 2024
Mohamad Ballout, Ulf Krumnack, Gunther Heidemann, Kai-Uwe Kühnberger

Learning Disentangled Speech Representations

Nov 04, 2023
Yusuf Brima, Ulf Krumnack, Simone Pika, Gunther Heidemann

Understanding Self-Supervised Learning of Speech Representation via Invariance and Redundancy Reduction

Sep 07, 2023
Yusuf Brima, Ulf Krumnack, Simone Pika, Gunther Heidemann

Investigating Pre-trained Language Models on Cross-Domain Datasets, a Step Closer to General AI

Jun 21, 2023
Mohamad Ballout, Ulf Krumnack, Gunther Heidemann, Kai-Uwe Kühnberger

Opening the Black Box: Analyzing Attention Weights and Hidden States in Pre-trained Language Models for Non-language Tasks

Jun 21, 2023
Mohamad Ballout, Ulf Krumnack, Gunther Heidemann, Kai-Uwe Kühnberger
