Zongxin Yang

Noise-Tolerant Hybrid Prototypical Learning with Noisy Web Data

Jan 05, 2025

Generalizable Origin Identification for Text-Guided Image-to-Image Diffusion Models

Jan 04, 2025

Collaborative Hybrid Propagator for Temporal Misalignment in Audio-Visual Segmentation

Dec 11, 2024

3DIS: Depth-Driven Decoupled Instance Synthesis for Text-to-Image Generation

Oct 16, 2024

MIGC++: Advanced Multi-Instance Generation Controller for Image Synthesis

Jul 02, 2024

MIGC: Multi-Instance Generation Controller for Text-to-Image Synthesis

Feb 08, 2024

Explore Synergistic Interaction Across Frames for Interactive Video Object Segmentation

Feb 04, 2024

DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models

Jan 16, 2024

Controllable 3D Face Generation with Conditional Style Code Diffusion

Jan 11, 2024

GD^2-NeRF: Generative Detail Compensation via GAN and Diffusion for One-shot Generalizable Neural Radiance Fields

Jan 02, 2024