Zhe Gan


The Elastic Lottery Ticket Hypothesis

Mar 30, 2021
Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Jingjing Liu, Zhangyang Wang

Adversarial Feature Augmentation and Normalization for Visual Recognition

Mar 22, 2021
Tianlong Chen, Yu Cheng, Zhe Gan, Jianfeng Wang, Lijuan Wang, Zhangyang Wang, Jingjing Liu

Improving Zero-shot Voice Style Transfer via Disentangled Representation Learning

Mar 17, 2021
Siyang Yuan, Pengyu Cheng, Ruiyi Zhang, Weituo Hao, Zhe Gan, Lawrence Carin

Ultra-Data-Efficient GAN Training: Drawing A Lottery Ticket First, Then Training It Toughly

Feb 28, 2021
Tianlong Chen, Yu Cheng, Zhe Gan, Jingjing Liu, Zhangyang Wang

Less is More: ClipBERT for Video-and-Language Learning via Sparse Sampling

Feb 11, 2021
Jie Lei, Linjie Li, Luowei Zhou, Zhe Gan, Tamara L. Berg, Mohit Bansal, Jingjing Liu

EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets

Dec 31, 2020
Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Zhangyang Wang, Jingjing Liu

Wasserstein Contrastive Representation Distillation

Dec 15, 2020
Liqun Chen, Zhe Gan, Dong Wang, Jingjing Liu, Ricardo Henao, Lawrence Carin

A Closer Look at the Robustness of Vision-and-Language Pre-trained Models

Dec 15, 2020
Linjie Li, Zhe Gan, Jingjing Liu

InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective

Oct 14, 2020
Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, Jingjing Liu

Cross-Thought for Sentence Encoder Pre-training

Oct 07, 2020
Shuohang Wang, Yuwei Fang, Siqi Sun, Zhe Gan, Yu Cheng, Jing Jiang, Jingjing Liu
