
Dietrich Klakow

Few-shot Fine-tuning vs. In-context Learning: A Fair Comparison and Evaluation

May 30, 2023

Weaker Than You Think: A Critical Look at Weakly Supervised Learning

May 27, 2023

MasakhaPOS: Part-of-Speech Tagging for Typologically Diverse African Languages

May 23, 2023

$\varepsilon$ KÚ <MASK>: Integrating Yorùbá cultural greetings into machine translation

Apr 24, 2023

Analyzing the Representational Geometry of Acoustic Word Embeddings

Jan 08, 2023

A Data-Driven Investigation of Noise-Adaptive Utterance Generation with Linguistic Modification

Oct 19, 2022

Integrating Form and Meaning: A Multi-Task Learning Model for Acoustic Word Embeddings

Sep 18, 2022

Fusing Sentence Embeddings Into LSTM-based Autoregressive Language Models

Aug 05, 2022

TOKEN is a MASK: Few-shot Named Entity Recognition with Pre-trained Language Models

Jun 15, 2022

Task-Adaptive Pre-Training for Boosting Learning With Noisy Labels: A Study on Text Classification for African Languages

Jun 03, 2022