
Shuohang Wang


The Elastic Lottery Ticket Hypothesis

Mar 30, 2021
Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Jingjing Liu, Zhangyang Wang

Figures 1–4

LightningDOT: Pre-training Visual-Semantic Embeddings for Real-Time Image-Text Retrieval

Mar 16, 2021
Siqi Sun, Yen-Chun Chen, Linjie Li, Shuohang Wang, Yuwei Fang, Jingjing Liu

Figures 1–4

EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets

Dec 31, 2020
Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Zhangyang Wang, Jingjing Liu

Figures 1–4

InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective

Oct 14, 2020
Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, Jingjing Liu

Figures 1–4

Counterfactual Variable Control for Robust and Interpretable Question Answering

Oct 12, 2020
Sicheng Yu, Yulei Niu, Shuohang Wang, Jing Jiang, Qianru Sun

Figures 1–4

Cross-Thought for Sentence Encoder Pre-training

Oct 07, 2020
Shuohang Wang, Yuwei Fang, Siqi Sun, Zhe Gan, Yu Cheng, Jing Jiang, Jingjing Liu

Figures 1–4

Multi-Fact Correction in Abstractive Text Summarization

Oct 06, 2020
Yue Dong, Shuohang Wang, Zhe Gan, Yu Cheng, Jackie Chi Kit Cheung, Jingjing Liu

Figures 1–4

Contrastive Distillation on Intermediate Representations for Language Model Compression

Sep 29, 2020
Siqi Sun, Zhe Gan, Yu Cheng, Yuwei Fang, Shuohang Wang, Jingjing Liu

Figures 1–4

FILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding

Sep 17, 2020
Yuwei Fang, Shuohang Wang, Zhe Gan, Siqi Sun, Jingjing Liu

Figures 1–4