Chang Zhou
Prompt Tuning for Generative Multimodal Pretrained Models

Aug 04, 2022
Hao Yang, Junyang Lin, An Yang, Peng Wang, Chang Zhou, Hongxia Yang

Single Stage Virtual Try-on via Deformable Attention Flows

Jul 19, 2022
Shuai Bai, Huiling Zhou, Zhikang Li, Chang Zhou, Hongxia Yang

Instance-wise Prompt Tuning for Pretrained Language Models

Jun 04, 2022
Yuezihan Jiang, Hao Yang, Junyang Lin, Hanyu Zhao, An Yang, Chang Zhou, Hongxia Yang, Zhi Yang, Bin Cui

M6-Fashion: High-Fidelity Multi-modal Image Generation and Editing

May 24, 2022
Zhikang Li, Huiling Zhou, Shuai Bai, Peike Li, Chang Zhou, Hongxia Yang

M6-Rec: Generative Pretrained Language Models are Open-Ended Recommender Systems

May 19, 2022
Zeyu Cui, Jianxin Ma, Chang Zhou, Jingren Zhou, Hongxia Yang

In-N-Out Generative Learning for Dense Unsupervised Video Segmentation

Apr 11, 2022
Xiao Pan, Peike Li, Zongxin Yang, Huiling Zhou, Chang Zhou, Hongxia Yang, Jingren Zhou, Yi Yang

Modality Competition: What Makes Joint Training of Multi-modal Network Fail in Deep Learning? (Provably)

Mar 23, 2022
Yu Huang, Junyang Lin, Chang Zhou, Hongxia Yang, Longbo Huang

Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework

Feb 07, 2022
Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, Hongxia Yang

Are we really making much progress? Revisiting, benchmarking, and refining heterogeneous graph neural networks

Dec 30, 2021
Qingsong Lv, Ming Ding, Qiang Liu, Yuxiang Chen, Wenzheng Feng, Siming He, Chang Zhou, Jianguo Jiang, Yuxiao Dong, Jie Tang

M6-10T: A Sharing-Delinking Paradigm for Efficient Multi-Trillion Parameter Pretraining

Oct 25, 2021
Junyang Lin, An Yang, Jinze Bai, Chang Zhou, Le Jiang, Xianyan Jia, Ang Wang, Jie Zhang, Yong Li, Wei Lin, Jingren Zhou, Hongxia Yang
