
Huan Liu

Exploiting Class Probabilities for Black-box Sentence-level Attacks

Feb 05, 2024

A Generative Approach to Surrogate-based Black-box Attacks

Feb 05, 2024

Causal Feature Selection for Responsible Machine Learning

Feb 05, 2024

Adversarial Text Purification: A Large Language Model Approach for Defense

Feb 05, 2024

Test-Time Personalization with Meta Prompt for Gaze Estimation

Jan 03, 2024

Forgery-aware Adaptive Transformer for Generalizable Synthetic Image Detection

Dec 27, 2023

Sparsity-Guided Holistic Explanation for LLMs with Interpretable Inference-Time Intervention

Dec 22, 2023

Rethinking the Up-Sampling Operations in CNN-based Generative Network for Generalizable Deepfake Detection

Dec 20, 2023

CSGNN: Conquering Noisy Node Labels via Dynamic Class-wise Selection

Nov 20, 2023

Can Knowledge Graphs Reduce Hallucinations in LLMs?: A Survey

Nov 14, 2023