An Yang

Qwen Technical Report

Sep 28, 2023
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu

ExpertPrompting: Instructing Large Language Models to be Distinguished Experts

May 24, 2023
Benfeng Xu, An Yang, Junyang Lin, Quan Wang, Chang Zhou, Yongdong Zhang, Zhendong Mao

Transferring General Multimodal Pretrained Models to Text Recognition

Dec 19, 2022
Junyang Lin, Xuancheng Ren, Yichang Zhang, Gao Liu, Peng Wang, An Yang, Chang Zhou

OFASys: A Multi-Modal Multi-Task Learning System for Building Generalist Models

Dec 08, 2022
Jinze Bai, Rui Men, Hao Yang, Xuancheng Ren, Kai Dang, Yichang Zhang, Xiaohuan Zhou, Peng Wang, Sinan Tan, An Yang, Zeyu Cui, Yu Han, Shuai Bai, Wenbin Ge, Jianxin Ma, Junyang Lin, Jingren Zhou, Chang Zhou

Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese

Nov 03, 2022
An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou

Prompt Tuning for Generative Multimodal Pretrained Models

Aug 04, 2022
Hao Yang, Junyang Lin, An Yang, Peng Wang, Chang Zhou, Hongxia Yang

Instance-wise Prompt Tuning for Pretrained Language Models

Jun 04, 2022
Yuezihan Jiang, Hao Yang, Junyang Lin, Hanyu Zhao, An Yang, Chang Zhou, Hongxia Yang, Zhi Yang, Bin Cui

Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework

Feb 07, 2022
Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, Hongxia Yang

M6-10T: A Sharing-Delinking Paradigm for Efficient Multi-Trillion Parameter Pretraining

Oct 25, 2021
Junyang Lin, An Yang, Jinze Bai, Chang Zhou, Le Jiang, Xianyan Jia, Ang Wang, Jie Zhang, Yong Li, Wei Lin, Jingren Zhou, Hongxia Yang
