Jay Zhangjie Wu

Video-Language Understanding: A Survey from Model Architecture, Model Training, and Data Perspectives

Jun 09, 2024

Towards A Better Metric for Text-to-Video Generation

Jan 15, 2024

VideoSwap: Customized Video Subject Swapping with Interactive Semantic Point Correspondence

Dec 05, 2023

CVPR 2023 Text Guided Video Editing Competition

Oct 24, 2023

DynVideo-E: Harnessing Dynamic NeRF for Large-Scale Motion- and View-Change Human-Centric Video Editing

Oct 16, 2023

MotionDirector: Motion Customization of Text-to-Video Diffusion Models

Oct 12, 2023

Show-1: Marrying Pixel and Latent Diffusion Models for Text-to-Video Generation

Sep 27, 2023

Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models

May 29, 2023

Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation

Dec 22, 2022

Symbolic Replay: Scene Graph as Prompt for Continual Learning on VQA Task

Aug 29, 2022