Li Dong

Exploiting Constructive Interference for Backscatter Communication Systems

May 22, 2022
Gu Bowen, Li Dong, Liu Ye, Xu Yongjun

Prototypical Calibration for Few-shot Learning of Language Models

May 20, 2022
Zhixiong Han, Yaru Hao, Li Dong, Furu Wei

Visually-Augmented Language Modeling

May 20, 2022
Weizhi Wang, Li Dong, Hao Cheng, Haoyu Song, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, Furu Wei

Transferability of Adversarial Attacks on Synthetic Speech Detection

May 16, 2022
Jiacheng Deng, Shunyi Chen, Li Dong, Diqun Yan, Rangding Wang

Many a Little Makes a Mickle: Probing Backscattering Energy Recycling for Backscatter Communications

May 01, 2022
Gu Bowen, Li Dong, Xu Yongjun, Li Chunguo, Sun Sumei

On the Representation Collapse of Sparse Mixture of Experts

Apr 20, 2022
Zewen Chi, Li Dong, Shaohan Huang, Damai Dai, Shuming Ma, Barun Patra, Saksham Singhal, Payal Bajaj, Xia Song, Furu Wei

StableMoE: Stable Routing Strategy for Mixture of Experts

Apr 18, 2022
Damai Dai, Li Dong, Shuming Ma, Bo Zheng, Zhifang Sui, Baobao Chang, Furu Wei

CLIP Models are Few-shot Learners: Empirical Studies on VQA and Visual Entailment

Mar 14, 2022
Haoyu Song, Li Dong, Wei-Nan Zhang, Ting Liu, Furu Wei

DeepNet: Scaling Transformers to 1,000 Layers

Mar 01, 2022
Hongyu Wang, Shuming Ma, Li Dong, Shaohan Huang, Dongdong Zhang, Furu Wei

Controllable Natural Language Generation with Contrastive Prefixes

Feb 27, 2022
Jing Qian, Li Dong, Yelong Shen, Furu Wei, Weizhu Chen
