Leo Wanner

GPT-HateCheck: Can LLMs Write Better Functional Tests for Hate Speech Detection?

Feb 23, 2024
Yiping Jin, Leo Wanner, Alexander Shvets

User Identity Linkage in Social Media Using Linguistic and Social Interaction Features

Aug 22, 2023
Despoina Chatzakou, Juan Soler-Company, Theodora Tsikrika, Leo Wanner, Stefanos Vrochidis, Ioannis Kompatsiaris

Towards Weakly-Supervised Hate Speech Classification Across Datasets

May 04, 2023
Yiping Jin, Leo Wanner, Vishakha Laxman Kadam, Alexander Shvets

Missing Information, Unresponsive Authors, Experimental Flaws: The Impossibility of Assessing the Reproducibility of Previous Human Evaluations in NLP

May 02, 2023
Anya Belz, Craig Thomson, Ehud Reiter, Gavin Abercrombie, Jose M. Alonso-Moral, Mohammad Arvan, Jackie Cheung, Mark Cieliebak, Elizabeth Clark, Kees van Deemter, Tanvi Dinkar, Ondřej Dušek, Steffen Eger, Qixiang Fang, Albert Gatt, Dimitra Gkatzia, Javier González-Corbelle, Dirk Hovy, Manuela Hürlimann, Takumi Ito, John D. Kelleher, Filip Klubicka, Huiyuan Lai, Chris van der Lee, Emiel van Miltenburg, Yiru Li, Saad Mahamood, Margot Mieskes, Malvina Nissim, Natalie Parde, Ondřej Plátek, Verena Rieser, Pablo Mosteiro Romero, Joel Tetreault, Antonio Toral, Xiaojun Wan, Leo Wanner, Lewis Watson, Diyi Yang

Multilingual Extraction and Categorization of Lexical Collocations with Graph-aware Transformers

May 23, 2022
Luis Espinosa-Anke, Alexander Shvets, Alireza Mohammadshahi, James Henderson, Leo Wanner

How much pretraining data do language models need to learn syntax?

Sep 09, 2021
Laura Pérez-Mayos, Miguel Ballesteros, Leo Wanner

Assessing the Syntactic Capabilities of Transformer-based Multilingual Language Models

May 10, 2021
Laura Pérez-Mayos, Alba Táboas García, Simon Mille, Leo Wanner

On the Evolution of Syntactic Information Encoded by BERT's Contextualized Representations

Feb 10, 2021
Laura Pérez-Mayos, Roberto Carlini, Miguel Ballesteros, Leo Wanner

Concept Extraction Using Pointer-Generator Networks

Aug 25, 2020
Alexander Shvets, Leo Wanner
