Shusheng Yang

Qwen Technical Report

Sep 28, 2023
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu

Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond

Sep 14, 2023
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou

TouchStone: Evaluating Vision-Language Models by Language Models

Sep 04, 2023
Shuai Bai, Shusheng Yang, Jinze Bai, Peng Wang, Xingxuan Zhang, Junyang Lin, Xinggang Wang, Chang Zhou, Jingren Zhou

Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities

Aug 24, 2023
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou

ViTMatte: Boosting Image Matting with Pretrained Plain Vision Transformers

May 24, 2023
Jingfeng Yao, Xinggang Wang, Shusheng Yang, Baoyuan Wang

MobileInst: Video Instance Segmentation on the Mobile

Mar 30, 2023
Renhong Zhang, Tianheng Cheng, Shusheng Yang, Haoyi Jiang, Shuai Zhang, Jiancheng Lyu, Xin Li, Xiaowen Ying, Dashan Gao, Wenyu Liu, Xinggang Wang

Masked Visual Reconstruction in Language Semantic Space

Jan 17, 2023
Shusheng Yang, Yixiao Ge, Kun Yi, Dian Li, Ying Shan, Xiaohu Qie, Xinggang Wang

Masked Image Modeling with Denoising Contrast

May 19, 2022
Kun Yi, Yixiao Ge, Xiaotong Li, Shusheng Yang, Dian Li, Jianping Wu, Ying Shan, Xiaohu Qie

Temporally Efficient Vision Transformer for Video Instance Segmentation

Apr 18, 2022
Shusheng Yang, Xinggang Wang, Yu Li, Yuxin Fang, Jiemin Fang, Wenyu Liu, Xun Zhao, Ying Shan
