Zonghan Yang

ReAct Meets ActRe: Autonomous Annotation of Agent Trajectories for Contrastive Self-Training

Mar 25, 2024
Zonghan Yang, Peng Li, Ming Yan, Ji Zhang, Fei Huang, Yang Liu



PANDA: Preference Adaptation for Enhancing Domain-Specific Abilities of LLMs

Feb 20, 2024
An Liu, Zonghan Yang, Zhenhe Zhang, Qingyuan Hu, Peng Li, Ming Yan, Ji Zhang, Fei Huang, Yang Liu


Scaffolding Coordinates to Promote Vision-Language Coordination in Large Multi-Modal Models

Feb 19, 2024
Xuanyu Lei, Zonghan Yang, Xinrui Chen, Peng Li, Yang Liu


OneBit: Towards Extremely Low-bit Large Language Models

Feb 17, 2024
Yuzhuang Xu, Xu Han, Zonghan Yang, Shuo Wang, Qingfu Zhu, Zhiyuan Liu, Weidong Liu, Wanxiang Che


Towards Unified Alignment Between Agents, Humans, and Environment

Feb 14, 2024
Zonghan Yang, An Liu, Zijun Liu, Kaiming Liu, Fangzhou Xiong, Yile Wang, Zeyuan Yang, Qingyuan Hu, Xinrui Chen, Zhenhe Zhang, Fuwen Luo, Zhicheng Guo, Peng Li, Yang Liu


Adversarial Robust Memory-Based Continual Learner

Nov 29, 2023
Xiaoyue Mi, Fan Tang, Zonghan Yang, Danding Wang, Juan Cao, Peng Li, Yang Liu


Bridging the Gap between Decision and Logits in Decision-based Knowledge Distillation for Pre-trained Language Models

Jun 15, 2023
Qinhong Zhou, Zonghan Yang, Peng Li, Yang Liu


Arbitrary Few Parameters are Good Enough for Adapting Large-scale Pre-trained Language Models

Jun 04, 2023
Yusheng Su, Chi-Min Chan, Jiali Cheng, Yujia Qin, Yankai Lin, Shengding Hu, Zonghan Yang, Ning Ding, Zhiyuan Liu, Maosong Sun
