Bettina Berendt

Tik-to-Tok: Translating Language Models One Token at a Time: An Embedding Initialization Strategy for Efficient Language Adaptation

Oct 05, 2023
François Remy, Pieter Delobelle, Bettina Berendt, Kris Demuynck, Thomas Demeester

Bias, diversity, and challenges to fairness in classification and automated text analysis. From libraries to AI and back

Mar 07, 2023
Bettina Berendt, Özgür Karadeniz, Sercan Kıyak, Stefan Mertens, Leen d'Haenens

Domain Adaptive Decision Trees: Implications for Accuracy and Fairness

Feb 27, 2023
Jose M. Alvarez, Kristen M. Scott, Salvatore Ruggieri, Bettina Berendt

How Far Can It Go? On Intrinsic Gender Bias Mitigation for Text Classification

Jan 30, 2023
Ewoenam Tokpo, Pieter Delobelle, Bettina Berendt, Toon Calders

Political representation bias in DBpedia and Wikidata as a challenge for downstream processing

Dec 29, 2022
Özgür Karadeniz, Bettina Berendt, Sercan Kıyak, Stefan Mertens, Leen d'Haenens

RobBERT-2022: Updating a Dutch Language Model to Account for Evolving Language Use

Nov 15, 2022
Pieter Delobelle, Thomas Winters, Bettina Berendt

FairDistillation: Mitigating Stereotyping in Language Models

Jul 10, 2022
Pieter Delobelle, Bettina Berendt

RobBERTje: a Distilled Dutch BERT Model

Apr 28, 2022
Pieter Delobelle, Thomas Winters, Bettina Berendt

Measuring Fairness with Biased Rulers: A Survey on Quantifying Biases in Pretrained Language Models

Dec 14, 2021
Pieter Delobelle, Ewoenam Kwaku Tokpo, Toon Calders, Bettina Berendt

Whistleblower protection in the digital age -- why 'anonymous' is not enough. Towards an interdisciplinary view of ethical dilemmas

Nov 11, 2021
Bettina Berendt, Stefan Schiffner
