
Zhaoxia Yin

Face-D(^2)CL: Multi-Domain Synergistic Representation with Dual Continual Learning for Facial DeepFake Detection

Apr 09, 2026

Tex3D: Objects as Attack Surfaces via Adversarial 3D Textures for Vision-Language-Action Models

Apr 02, 2026

TAME: A Trustworthy Test-Time Evolution of Agent Memory with Systematic Benchmarking

Feb 03, 2026

Adversarial Attacks on Medical Hyperspectral Imaging Exploiting Spectral-Spatial Dependencies and Multiscale Features

Jan 11, 2026

Who Can See Through You? Adversarial Shielding Against VLM-Based Attribute Inference Attacks

Dec 20, 2025

KG-DF: A Black-box Defense Framework against Jailbreak Attacks Based on Knowledge Graphs

Nov 09, 2025

Exploring the Secondary Risks of Large Language Models

Jun 14, 2025

FGS-Audio: Fixed-Decoder Framework for Audio Steganography with Adversarial Perturbation Generation

May 28, 2025

Protecting Copyright of Medical Pre-trained Language Models: Training-Free Backdoor Watermarking

Sep 14, 2024

A Survey of Fragile Model Watermarking

Jun 20, 2024