Zonghan Yang

ReAct Meets ActRe: Autonomous Annotations of Agent Trajectories for Contrastive Self-Training

Mar 21, 2024
Zonghan Yang, Peng Li, Ming Yan, Ji Zhang, Fei Huang, Yang Liu

PANDA: Preference Adaptation for Enhancing Domain-Specific Abilities of LLMs

Feb 20, 2024
An Liu, Zonghan Yang, Zhenhe Zhang, Qingyuan Hu, Peng Li, Ming Yan, Ji Zhang, Fei Huang, Yang Liu

Scaffolding Coordinates to Promote Vision-Language Coordination in Large Multi-Modal Models

Feb 19, 2024
Xuanyu Lei, Zonghan Yang, Xinrui Chen, Peng Li, Yang Liu

OneBit: Towards Extremely Low-bit Large Language Models

Feb 17, 2024
Yuzhuang Xu, Xu Han, Zonghan Yang, Shuo Wang, Qingfu Zhu, Zhiyuan Liu, Weidong Liu, Wanxiang Che

Towards Unified Alignment Between Agents, Humans, and Environment

Feb 14, 2024
Zonghan Yang, An Liu, Zijun Liu, Kaiming Liu, Fangzhou Xiong, Yile Wang, Zeyuan Yang, Qingyuan Hu, Xinrui Chen, Zhenhe Zhang, Fuwen Luo, Zhicheng Guo, Peng Li, Yang Liu

Adversarial Robust Memory-Based Continual Learner

Nov 29, 2023
Xiaoyue Mi, Fan Tang, Zonghan Yang, Danding Wang, Juan Cao, Peng Li, Yang Liu

Bridging the Gap between Decision and Logits in Decision-based Knowledge Distillation for Pre-trained Language Models

Jun 15, 2023
Qinhong Zhou, Zonghan Yang, Peng Li, Yang Liu

Arbitrary Few Parameters are Good Enough for Adapting Large-scale Pre-trained Language Models

Jun 04, 2023
Yusheng Su, Chi-Min Chan, Jiali Cheng, Yujia Qin, Yankai Lin, Shengding Hu, Zonghan Yang, Ning Ding, Zhiyuan Liu, Maosong Sun

Improving Adversarial Robustness of DEQs with Explicit Regulations Along the Neural Dynamics

Jun 02, 2023
Zonghan Yang, Peng Li, Tianyu Pang, Yang Liu
