Wayne Xin Zhao

Scaling Pre-trained Language Models to Deeper via Parameter-efficient Architecture

Mar 27, 2023
Peiyu Liu, Ze-Feng Gao, Wayne Xin Zhao, Ji-Rong Wen

[Figures 1–4]

Diffusion Models for Non-autoregressive Text Generation: A Survey

Mar 12, 2023
Yifan Li, Kun Zhou, Wayne Xin Zhao, Ji-Rong Wen

[Figures 1–2]

A Survey on Long Text Modeling with Transformers

Feb 28, 2023
Zican Dong, Tianyi Tang, Lunyi Li, Wayne Xin Zhao

[Figures 1–3]

Hybrid Contrastive Constraints for Multi-Scenario Ad Ranking

Feb 06, 2023
Shanlei Mu, Penghui Wei, Wayne Xin Zhao, Shaoguo Liu, Liang Wang, Bo Zheng

[Figures 1–4]

PDFormer: Propagation Delay-aware Dynamic Long-range Transformer for Traffic Flow Prediction

Jan 19, 2023
Jiawei Jiang, Chengkai Han, Wayne Xin Zhao, Jingyuan Wang

[Figures 1–4]

Continuous Trajectory Generation Based on Two-Stage GAN

Jan 16, 2023
Wenjun Jiang, Wayne Xin Zhao, Jingyuan Wang, Jiawei Jiang

[Figures 1–4]

TikTalk: A Multi-Modal Dialogue Dataset for Real-World Chitchat

Jan 14, 2023
Hongpeng Lin, Ludan Ruan, Wenke Xia, Peiyu Liu, Jingyuan Wen, Yixin Xu, Di Hu, Ruihua Song, Wayne Xin Zhao, Qin Jin, Zhiwu Lu

[Figures 1–4]

TextBox 2.0: A Text Generation Library with Pre-trained Language Models

Dec 26, 2022
Tianyi Tang, Junyi Li, Zhipeng Chen, Yiwen Hu, Zhuohao Yu, Wenxun Dai, Zican Dong, Xiaoxue Cheng, Yuhao Wang, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen

[Figures 1–4]

Visually-augmented pretrained language models for NLP tasks without images

Dec 15, 2022
Hangyu Guo, Kun Zhou, Wayne Xin Zhao, Qinyu Zhang, Ji-Rong Wen

[Figures 1–4]

MASTER: Multi-task Pre-trained Bottlenecked Masked Autoencoders are Better Dense Retrievers

Dec 15, 2022
Kun Zhou, Xiao Liu, Yeyun Gong, Wayne Xin Zhao, Daxin Jiang, Nan Duan, Ji-Rong Wen

[Figures 1–4]