
Junyang Lin


ONE-PEACE: Exploring One General Representation Model Toward Unlimited Modalities

May 18, 2023
Peng Wang, Shijie Wang, Junyang Lin, Shuai Bai, Xiaohuan Zhou, Jingren Zhou, Xinggang Wang, Chang Zhou


Transferring General Multimodal Pretrained Models to Text Recognition

Dec 19, 2022
Junyang Lin, Xuancheng Ren, Yichang Zhang, Gao Liu, Peng Wang, An Yang, Chang Zhou


OFASys: A Multi-Modal Multi-Task Learning System for Building Generalist Models

Dec 08, 2022
Jinze Bai, Rui Men, Hao Yang, Xuancheng Ren, Kai Dang, Yichang Zhang, Xiaohuan Zhou, Peng Wang, Sinan Tan, An Yang, Zeyu Cui, Yu Han, Shuai Bai, Wenbin Ge, Jianxin Ma, Junyang Lin, Jingren Zhou, Chang Zhou


Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese

Nov 03, 2022
An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou


Prompt Tuning for Generative Multimodal Pretrained Models

Aug 04, 2022
Hao Yang, Junyang Lin, An Yang, Peng Wang, Chang Zhou, Hongxia Yang


Instance-wise Prompt Tuning for Pretrained Language Models

Jun 04, 2022
Yuezihan Jiang, Hao Yang, Junyang Lin, Hanyu Zhao, An Yang, Chang Zhou, Hongxia Yang, Zhi Yang, Bin Cui


Modality Competition: What Makes Joint Training of Multi-modal Network Fail in Deep Learning? (Provably)

Mar 23, 2022
Yu Huang, Junyang Lin, Chang Zhou, Hongxia Yang, Longbo Huang


Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework

Feb 07, 2022
Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, Hongxia Yang


KNAS: Green Neural Architecture Search

Nov 26, 2021
Jingjing Xu, Liang Zhao, Junyang Lin, Rundong Gao, Xu Sun, Hongxia Yang


M6-10T: A Sharing-Delinking Paradigm for Efficient Multi-Trillion Parameter Pretraining

Oct 25, 2021
Junyang Lin, An Yang, Jinze Bai, Chang Zhou, Le Jiang, Xianyan Jia, Ang Wang, Jie Zhang, Yong Li, Wei Lin, Jingren Zhou, Hongxia Yang
