Xiaomeng Fu

Unveiling Structural Memorization: Structural Membership Inference Attack for Text-to-Image Diffusion Models

Jul 18, 2024

Explicit Correlation Learning for Generalizable Cross-Modal Deepfake Detection

Apr 30, 2024

Model Will Tell: Training Membership Inference for Diffusion Models

Mar 13, 2024

OSM-Net: One-to-Many One-shot Talking Head Generation with Spontaneous Head Motions

Sep 28, 2023

MFR-Net: Multi-faceted Responsive Listening Head Generation via Denoising Diffusion Model

Aug 31, 2023

FONT: Flow-guided One-shot Talking Head Generation with Natural Head Motions

Mar 31, 2023

OPT: One-shot Pose-Controllable Talking Head Generation

Feb 16, 2023