Harry Yang

AnyV2V: A Plug-and-Play Framework For Any Video-to-Video Editing Tasks
Mar 22, 2024
Max Ku, Cong Wei, Weiming Ren, Harry Yang, Wenhu Chen

ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation
Feb 06, 2024
Weiming Ren, Harry Yang, Ge Zhang, Cong Wei, Xinrun Du, Stephen Huang, Wenhu Chen

Latent-Shift: Latent Diffusion with Temporal Shift for Efficient Text-to-Video Generation
Apr 18, 2023
Jie An, Songyang Zhang, Harry Yang, Sonal Gupta, Jia-Bin Huang, Jiebo Luo, Xi Yin

Make-A-Video: Text-to-Video Generation without Text-Video Data
Sep 29, 2022
Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, Devi Parikh, Sonal Gupta, Yaniv Taigman

RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy and Out-of-Distribution Robustness
Jun 29, 2022
Francesco Pinto, Harry Yang, Ser-Nam Lim, Philip H. S. Torr, Puneet K. Dokania

MUGEN: A Playground for Video-Audio-Text Multimodal Understanding and GENeration
Apr 28, 2022
Thomas Hayes, Songyang Zhang, Xi Yin, Guan Pang, Sasha Sheng, Harry Yang, Songwei Ge, Qiyuan Hu, Devi Parikh

Long Video Generation with Time-Agnostic VQGAN and Time-Sensitive Transformer
Apr 07, 2022
Songwei Ge, Thomas Hayes, Harry Yang, Xi Yin, Guan Pang, David Jacobs, Jia-Bin Huang, Devi Parikh

Robustness and Generalization via Generative Adversarial Training
Sep 06, 2021
Omid Poursaeed, Tianxing Jiang, Harry Yang, Serge Belongie, Ser-Nam Lim

Fine-grained Synthesis of Unrestricted Adversarial Examples
Nov 20, 2019
Omid Poursaeed, Tianxing Jiang, Harry Yang, Serge Belongie, Ser-Nam Lim
