
Alexander Panchenko

Demarked: A Strategy for Enhanced Abusive Speech Moderation through Counterspeech, Detoxification, and Message Management

Jun 27, 2024

S3: A Simple Strong Sample-effective Multimodal Dialog System

Jun 26, 2024

Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph

Jun 21, 2024

xCOMET-lite: Bridging the Gap Between Efficiency and Quality in Learned MT Evaluation Metrics

Jun 20, 2024

SmurfCat at SemEval-2024 Task 6: Leveraging Synthetic Data for Hallucination Detection

Apr 09, 2024

MultiParaDetox: Extending Text Detoxification with Parallel Data to New Languages

Apr 02, 2024

TaxoLLaMA: WordNet-based Model for Solving Multiple Lexical Semantic Tasks

Mar 14, 2024

Fact-Checking the Output of Large Language Models via Token-Level Uncertainty Quantification

Mar 07, 2024

MERA: A Comprehensive LLM Evaluation in Russian

Jan 12, 2024

Exploring Methods for Cross-lingual Text Style Transfer: The Case of Text Detoxification

Nov 23, 2023