Jingjing Liu

LightningDOT: Pre-training Visual-Semantic Embeddings for Real-Time Image-Text Retrieval

Mar 16, 2021
Siqi Sun, Yen-Chun Chen, Linjie Li, Shuohang Wang, Yuwei Fang, Jingjing Liu

Ultra-Data-Efficient GAN Training: Drawing A Lottery Ticket First, Then Training It Toughly

Feb 28, 2021
Tianlong Chen, Yu Cheng, Zhe Gan, Jingjing Liu, Zhangyang Wang

Less is More: ClipBERT for Video-and-Language Learning via Sparse Sampling

Feb 11, 2021
Jie Lei, Linjie Li, Luowei Zhou, Zhe Gan, Tamara L. Berg, Mohit Bansal, Jingjing Liu

EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets

Dec 31, 2020
Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Zhangyang Wang, Jingjing Liu

Wasserstein Contrastive Representation Distillation

Dec 15, 2020
Liqun Chen, Zhe Gan, Dong Wang, Jingjing Liu, Ricardo Henao, Lawrence Carin

A Closer Look at the Robustness of Vision-and-Language Pre-trained Models

Dec 15, 2020
Linjie Li, Zhe Gan, Jingjing Liu

InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective

Oct 14, 2020
Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, Jingjing Liu

Cross-Thought for Sentence Encoder Pre-training

Oct 07, 2020
Shuohang Wang, Yuwei Fang, Siqi Sun, Zhe Gan, Yu Cheng, Jing Jiang, Jingjing Liu

Multi-Fact Correction in Abstractive Text Summarization

Oct 06, 2020
Yue Dong, Shuohang Wang, Zhe Gan, Yu Cheng, Jackie Chi Kit Cheung, Jingjing Liu
