Yang Wei

Enhance Reasoning for Large Language Models in the Game Werewolf

Feb 04, 2024
Shuang Wu, Liwen Zhu, Tao Yang, Shiwei Xu, Qiang Fu, Yang Wei, Haobo Fu


Self-Supervised Learning for SAR ATR with a Knowledge-Guided Predictive Architecture

Nov 26, 2023
Weijie Li, Yang Wei, Tianpeng Liu, Yuenan Hou, Yongxiang Liu, Li Liu


Make Pixels Dance: High-Dynamic Video Generation

Nov 18, 2023
Yan Zeng, Guoqiang Wei, Jiani Zheng, Jiaxin Zou, Yang Wei, Yuchen Zhang, Hang Li


Act As You Wish: Fine-Grained Control of Motion Diffusion Model with Hierarchical Semantic Graphs

Nov 02, 2023
Peng Jin, Yang Wu, Yanbo Fan, Zhongqian Sun, Yang Wei, Li Yuan


Patch Is Not All You Need

Aug 21, 2023
Changzhen Li, Jie Zhang, Yang Wei, Zhilong Ji, Jinfeng Bai, Shiguang Shan


What Matters in Training a GPT4-Style Language Model with Multimodal Inputs?

Jul 30, 2023
Yan Zeng, Hanbo Zhang, Jiani Zheng, Jiangnan Xia, Guoqiang Wei, Yang Wei, Yuchen Zhang, Tao Kong


Maximum Entropy Population Based Training for Zero-Shot Human-AI Coordination

Dec 22, 2021
Rui Zhao, Jinming Song, Hu Haifeng, Yang Gao, Yi Wu, Zhongqian Sun, Yang Wei


LightSeq2: Accelerated Training for Transformer-based Models on GPUs

Oct 27, 2021
Xiaohui Wang, Ying Xiong, Xian Qian, Yang Wei, Lei Li, Mingxuan Wang


LightSeq: Accelerated Training for Transformer-based Models on GPUs

Oct 12, 2021
Xiaohui Wang, Ying Xiong, Xian Qian, Yang Wei, Lei Li, Mingxuan Wang
