Weizhu Chen

CodeRetriever: Unimodal and Bimodal Contrastive Learning

Jan 26, 2022
Xiaonan Li, Yeyun Gong, Yelong Shen, Xipeng Qiu, Hang Zhang, Bolun Yao, Weizhen Qi, Daxin Jiang, Weizhu Chen, Nan Duan


Contextual Bandit Applications in Customer Support Bot

Dec 06, 2021
Sandra Sajeev, Jade Huang, Nikos Karampatziakis, Matthew Hall, Sebastian Kochman, Weizhu Chen


DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing

Nov 18, 2021
Pengcheng He, Jianfeng Gao, Weizhu Chen


DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models

Oct 30, 2021
Xuxi Chen, Tianlong Chen, Yu Cheng, Weizhu Chen, Zhangyang Wang, Ahmed Hassan Awadallah


Adversarial Retriever-Ranker for dense text retrieval

Oct 29, 2021
Hang Zhang, Yeyun Gong, Yelong Shen, Jiancheng Lv, Nan Duan, Weizhu Chen


A Good Prompt Is Worth Millions of Parameters? Low-resource Prompt-based Learning for Vision-Language Models

Oct 16, 2021
Woojeong Jin, Yu Cheng, Yelong Shen, Weizhu Chen, Xiang Ren


XLM-K: Improving Cross-Lingual Language Model Pre-Training with Multilingual Knowledge

Sep 26, 2021
Xiaoze Jiang, Yaobo Liang, Weizhu Chen, Nan Duan


ARCH: Efficient Adversarial Regularized Training with Caching

Sep 15, 2021
Simiao Zuo, Chen Liang, Haoming Jiang, Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen, Tuo Zhao


LoRA: Low-Rank Adaptation of Large Language Models

Jun 17, 2021
Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Weizhu Chen
