Xin Yi

AGMark: Attention-Guided Dynamic Watermarking for Large Vision-Language Models

Feb 10, 2026

Unified Defense for Large Language Models against Jailbreak and Fine-Tuning Attacks in Education

Nov 18, 2025

Generating Synthetic Contrast-Enhanced Chest CT Images from Non-Contrast Scans Using Slice-Consistent Brownian Bridge Diffusion Network

Aug 23, 2025

Hierarchical Safety Realignment: Lightweight Restoration of Safety in Pruned Large Vision-Language Models

May 22, 2025

Unified Attacks to Large Language Model Watermarks: Spoofing and Scrubbing in Unauthorized Knowledge Distillation

Apr 24, 2025

Exploring Reliable PPG Authentication on Smartwatches in Daily Scenarios

Mar 31, 2025

Latent-Space Adversarial Training with Post-Aware Calibration for Defending Large Language Models Against Jailbreak Attacks

Jan 18, 2025

NLSR: Neuron-Level Safety Realignment of Large Language Models Against Harmful Fine-Tuning

Dec 17, 2024

A Safety Realignment Framework via Subspace-Oriented Model Fusion for Large Language Models

May 15, 2024

Fine-Grained Detoxification via Instance-Level Prefixes for Large Language Models

Feb 26, 2024