Yichong Xu

i-Code: An Integrative and Composable Multimodal Learning Framework

May 05, 2022
Ziyi Yang, Yuwei Fang, Chenguang Zhu, Reid Pryzant, Dongdong Chen, Yu Shi, Yichong Xu, Yao Qian, Mei Gao, Yi-Ling Chen, Liyang Lu, Yujia Xie, Robert Gmyr, Noel Codella, Naoyuki Kanda, Bin Xiao, Lu Yuan, Takuya Yoshioka, Michael Zeng, Xuedong Huang

Integrating Rankings into Quantized Scores in Peer Review

Apr 05, 2022
Yusha Liu, Yichong Xu, Nihar B. Shah, Aarti Singh

Training Data is More Valuable than You Think: A Simple and Effective Method by Retrieving from Training Data

Mar 16, 2022
Shuohang Wang, Yichong Xu, Yuwei Fang, Yang Liu, Siqi Sun, Ruochen Xu, Chenguang Zhu, Michael Zeng

Unsupervised Summarization with Customized Granularities

Jan 29, 2022
Ming Zhong, Yang Liu, Suyu Ge, Yuning Mao, Yizhu Jiao, Xingxing Zhang, Yichong Xu, Chenguang Zhu, Michael Zeng, Jiawei Han

Human Parity on CommonsenseQA: Augmenting Self-Attention with External Attention

Dec 14, 2021
Yichong Xu, Chenguang Zhu, Shuohang Wang, Siqi Sun, Hao Cheng, Xiaodong Liu, Jianfeng Gao, Pengcheng He, Michael Zeng, Xuedong Huang

An Empirical Study of Training End-to-End Vision-and-Language Transformers

Nov 25, 2021
Zi-Yi Dou, Yichong Xu, Zhe Gan, Jianfeng Wang, Shuohang Wang, Lijuan Wang, Chenguang Zhu, Pengchuan Zhang, Lu Yuan, Nanyun Peng, Zicheng Liu, Michael Zeng

Leveraging Knowledge in Multilingual Commonsense Reasoning

Oct 16, 2021
Yuwei Fang, Shuohang Wang, Yichong Xu, Ruochen Xu, Siqi Sun, Chenguang Zhu, Michael Zeng

Dict-BERT: Enhancing Language Model Pre-training with Dictionary

Oct 13, 2021
Wenhao Yu, Chenguang Zhu, Yuwei Fang, Donghan Yu, Shuohang Wang, Yichong Xu, Michael Zeng, Meng Jiang
