Guoxing Yang

TCM-GPT: Efficient Pre-training of Large Language Models for Domain Adaptation in Traditional Chinese Medicine
Nov 03, 2023
Guoxing Yang, Jianyu Shi, Zan Wang, Xiaohong Liu, Guangyu Wang

ClinicalGPT: Large Language Models Finetuned with Diverse Medical Data and Comprehensive Evaluation
Jun 16, 2023
Guangyu Wang, Guoxing Yang, Zongxin Du, Longjun Fan, Xiaohu Li

VDT: An Empirical Study on Video Diffusion with Transformers
May 22, 2023
Haoyu Lu, Guoxing Yang, Nanyi Fei, Yuqi Huo, Zhiwu Lu, Ping Luo, Mingyu Ding

UniAdapter: Unified Parameter-Efficient Transfer Learning for Cross-modal Modeling
Feb 13, 2023
Haoyu Lu, Mingyu Ding, Yuqi Huo, Guoxing Yang, Zhiwu Lu, Masayoshi Tomizuka, Wei Zhan

WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model
Oct 27, 2021
Nanyi Fei, Zhiwu Lu, Yizhao Gao, Guoxing Yang, Yuqi Huo, Jingyuan Wen, Haoyu Lu, Ruihua Song, Xin Gao, Tao Xiang, Hao Sun, Ji-Rong Wen

WenLan: Bridging Vision and Language by Large-Scale Multi-Modal Pre-Training
Mar 19, 2021
Yuqi Huo, Manli Zhang, Guangzhen Liu, Haoyu Lu, Yizhao Gao, Guoxing Yang, Jingyuan Wen, Heng Zhang, Baogui Xu, Weihao Zheng, Zongzheng Xi, Yueqian Yang, Anwen Hu, Jinming Zhao, Ruichen Li, Yida Zhao, Liang Zhang, Yuqing Song, Xin Hong, Wanqing Cui, Danyang Hou, Yingyan Li, Junyi Li, Peiyu Liu, Zheng Gong, Chuhao Jin, Yuchong Sun, Shizhe Chen, Zhiwu Lu, Zhicheng Dou, Qin Jin, Yanyan Lan, Wayne Xin Zhao, Ruihua Song, Ji-Rong Wen
