Thai Le

A Curious Case of Searching for the Correlation between Training Data and Adversarial Robustness of Transformer Textual Models

Feb 18, 2024
Cuong Dang, Dung D. Le, Thai Le

Generalizability of Mixture of Domain-Specific Adapters from the Lens of Signed Weight Directions and its Application to Effective Model Pruning

Feb 16, 2024
Tuc Nguyen, Thai Le

ALISON: Fast and Effective Stylometric Authorship Obfuscation

Feb 01, 2024
Eric Xing, Saranya Venkatraman, Thai Le, Dongwon Lee

Marrying Adapters and Mixup to Efficiently Enhance the Adversarial Robustness of Pre-Trained Language Models for Text Classification

Jan 18, 2024
Tuc Nguyen, Thai Le

A Ship of Theseus: Curious Cases of Paraphrasing in LLM-Generated Texts

Nov 14, 2023
Nafis Irtiza Tripto, Saranya Venkatraman, Dominik Macko, Robert Moro, Ivan Srba, Adaku Uchendu, Thai Le, Dongwon Lee

HANSEN: Human and AI Spoken Text Benchmark for Authorship Analysis

Oct 25, 2023
Nafis Irtiza Tripto, Adaku Uchendu, Thai Le, Mattia Setzu, Fosca Giannotti, Dongwon Lee

MULTITuDE: Large-Scale Multilingual Machine-Generated Text Detection Benchmark

Oct 20, 2023
Dominik Macko, Robert Moro, Adaku Uchendu, Jason Samuel Lucas, Michiharu Yamashita, Matúš Pikuliak, Ivan Srba, Thai Le, Dongwon Lee, Jakub Simko, Maria Bielikova

TopRoBERTa: Topology-Aware Authorship Attribution of Deepfake Texts

Sep 22, 2023
Adaku Uchendu, Thai Le, Dongwon Lee

Are Your Explanations Reliable? Investigating the Stability of LIME in Explaining Textual Classification Models via Adversarial Perturbation

May 21, 2023
Christopher Burger, Lingwei Chen, Thai Le

Understanding Individual and Team-based Human Factors in Detecting Deepfake Texts

Apr 03, 2023
Adaku Uchendu, Jooyoung Lee, Hua Shen, Thai Le, Ting-Hao 'Kenneth' Huang, Dongwon Lee
