Yonglong Tian

Self-Correcting Self-Consuming Loops for Generative Model Training

Feb 11, 2024
Nate Gillman, Michael Freeman, Daksh Aggarwal, Chia-Hong Hsu, Calvin Luo, Yonglong Tian, Chen Sun

Denoising Vision Transformers

Jan 05, 2024
Jiawei Yang, Katie Z Luo, Jiefeng Li, Kilian Q Weinberger, Yonglong Tian, Yue Wang

Learning Vision from Models Rivals Learning Vision from Data

Dec 28, 2023
Yonglong Tian, Lijie Fan, Kaifeng Chen, Dina Katabi, Dilip Krishnan, Phillip Isola

Scaling Laws of Synthetic Images for Model Training ... for Now

Dec 07, 2023
Lijie Fan, Kaifeng Chen, Dilip Krishnan, Dina Katabi, Phillip Isola, Yonglong Tian

Leveraging Unpaired Data for Vision-Language Generative Models via Cycle Consistency

Oct 05, 2023
Tianhong Li, Sangnie Bhardwaj, Yonglong Tian, Han Zhang, Jarred Barber, Dina Katabi, Guillaume Lajoie, Huiwen Chang, Dilip Krishnan

Restart Sampling for Improving Generative Processes

Jun 26, 2023
Yilun Xu, Mingyang Deng, Xiang Cheng, Yonglong Tian, Ziming Liu, Tommi Jaakkola

StableRep: Synthetic Images from Text-to-Image Models Make Strong Visual Representation Learners

Jun 01, 2023
Yonglong Tian, Lijie Fan, Phillip Isola, Huiwen Chang, Dilip Krishnan

Improving CLIP Training with Language Rewrites

May 31, 2023
Lijie Fan, Dilip Krishnan, Phillip Isola, Dina Katabi, Yonglong Tian

PFGM++: Unlocking the Potential of Physics-Inspired Generative Models

Feb 10, 2023
Yilun Xu, Ziming Liu, Yonglong Tian, Shangyuan Tong, Max Tegmark, Tommi Jaakkola
