Venelin Kovatchev

Benchmark Transparency: Measuring the Impact of Data on Evaluation

Mar 31, 2024
Venelin Kovatchev, Matthew Lease

The State of Human-centered NLP Technology for Fact-checking

Jan 08, 2023
Anubrata Das, Houjiang Liu, Venelin Kovatchev, Matthew Lease

InferES : A Natural Language Inference Corpus for Spanish Featuring Negation-Based Contrastive and Adversarial Examples

Oct 06, 2022
Venelin Kovatchev, Mariona Taulé

Paraphrasing, textual entailment, and semantic similarity above word level

Aug 10, 2022
Venelin Kovatchev

longhorns at DADC 2022: How many linguists does it take to fool a Question Answering model? A systematic approach to adversarial attacks

Jun 29, 2022
Venelin Kovatchev, Trina Chatterjee, Venkata S Govindarajan, Jifan Chen, Eunsol Choi, Gabriella Chronis, Anubrata Das, Katrin Erk, Matthew Lease, Junyi Jessy Li, Yating Wu, Kyle Mahowald

Fairly Accurate: Learning Optimal Accuracy vs. Fairness Tradeoffs for Hate Speech Detection

Apr 15, 2022
Venelin Kovatchev, Soumyajit Gupta, Matthew Lease

ProtoTEx: Explaining Model Decisions with Prototype Tensors

Apr 11, 2022
Anubrata Das, Chitrank Gupta, Venelin Kovatchev, Matthew Lease, Junyi Jessy Li

NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation

Dec 06, 2021
Kaustubh D. Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, Saad Mahamood, Abinaya Mahendiran, Simon Mille, Ashish Srivastava, Samson Tan, Tongshuang Wu, Jascha Sohl-Dickstein, Jinho D. Choi, Eduard Hovy, Ondrej Dusek, Sebastian Ruder, Sajant Anand, Nagender Aneja, Rabin Banjade, Lisa Barthe, Hanna Behnke, Ian Berlot-Attwell, Connor Boyle, Caroline Brun, Marco Antonio Sobrevilla Cabezudo, Samuel Cahyawijaya, Emile Chapuis, Wanxiang Che, Mukund Choudhary, Christian Clauss, Pierre Colombo, Filip Cornell, Gautier Dagan, Mayukh Das, Tanay Dixit, Thomas Dopierre, Paul-Alexis Dray, Suchitra Dubey, Tatiana Ekeinhor, Marco Di Giovanni, Rishabh Gupta, Rishabh Gupta, Louanes Hamla, Sang Han, Fabrice Harel-Canada, Antoine Honore, Ishan Jindal, Przemyslaw K. Joniak, Denis Kleyko, Venelin Kovatchev, Kalpesh Krishna, Ashutosh Kumar, Stefan Langer, Seungjae Ryan Lee, Corey James Levinson, Hualou Liang, Kaizhao Liang, Zhexiong Liu, Andrey Lukyanenko, Vukosi Marivate, Gerard de Melo, Simon Meoni, Maxime Meyer, Afnan Mir, Nafise Sadat Moosavi, Niklas Muennighoff, Timothy Sum Hon Mun, Kenton Murray, Marcin Namysl, Maria Obedkova, Priti Oli, Nivranshu Pasricha, Jan Pfister, Richard Plant, Vinay Prabhu, Vasile Pais, Libo Qin, Shahab Raji, Pawan Kumar Rajpoot, Vikas Raunak, Roy Rinberg, Nicolas Roberts, Juan Diego Rodriguez, Claude Roux, Vasconcellos P. H. S., Ananya B. Sai, Robin M. Schmidt, Thomas Scialom, Tshephisho Sefara, Saqib N. Shamsi, Xudong Shen, Haoyue Shi, Yiwen Shi, Anna Shvets, Nick Siegel, Damien Sileo, Jamie Simon, Chandan Singh, Roman Sitelew, Priyank Soni, Taylor Sorensen, William Soto, Aman Srivastava, KV Aditya Srivatsa, Tony Sun, Mukund Varma T, A Tabassum, Fiona Anting Tan, Ryan Teehan, Mo Tiwari, Marie Tolkiehn, Athena Wang, Zijian Wang, Gloria Wang, Zijie J. Wang, Fuxuan Wei, Bryan Wilie, Genta Indra Winata, Xinyi Wu, Witold Wydmański, Tianbao Xie, Usama Yaseen, M. Yee, Jing Zhang, Yue Zhang

Can vectors read minds better than experts? Comparing data augmentation strategies for the automated scoring of children's mindreading ability

Jun 03, 2021
Venelin Kovatchev, Phillip Smith, Mark Lee, Rory Devine

"What is on your mind?" Automated Scoring of Mindreading in Childhood and Early Adolescence

Nov 16, 2020
Venelin Kovatchev, Phillip Smith, Mark Lee, Imogen Grumley Traynor, Irene Luque Aguilera, Rory T. Devine
