Baobao Chang

VeCAF: VLM-empowered Collaborative Active Finetuning with Training Objective Awareness

Jan 15, 2024
Rongyu Zhang, Zefan Cai, Huanrui Yang, Zidong Liu, Denis Gudovskiy, Tomoyuki Okuno, Yohei Nakata, Kurt Keutzer, Baobao Chang, Yuan Du, Li Du, Shanghang Zhang

ML-Bench: Large Language Models Leverage Open-source Libraries for Machine Learning Tasks

Nov 16, 2023
Yuliang Liu, Xiangru Tang, Zefan Cai, Junjie Lu, Yichi Zhang, Yanjun Shao, Zexuan Deng, Helan Hu, Zengxian Yang, Kaikai An, Ruijun Huang, Shuzheng Si, Sheng Chen, Haozhe Zhao, Zhengliang Li, Liang Chen, Yiming Zong, Yan Wang, Tianyu Liu, Zhiwei Jiang, Baobao Chang, Yujia Qin, Wangchunshu Zhou, Yilun Zhao, Arman Cohan, Mark Gerstein

Distantly-Supervised Named Entity Recognition with Uncertainty-aware Teacher Learning and Student-student Collaborative Learning

Nov 14, 2023
Helan Hu, Shuzheng Si, Haozhe Zhao, Shuang Zeng, Kaikai An, Zefan Cai, Baobao Chang

Coarse-to-Fine Dual Encoders are Better Frame Identification Learners

Oct 20, 2023
Kaikai An, Ce Zheng, Bofei Gao, Haozhe Zhao, Baobao Chang

Towards End-to-End Embodied Decision Making via Multi-modal Large Language Model: Explorations with GPT4-Vision and Beyond

Oct 16, 2023
Liang Chen, Yichi Zhang, Shuhuai Ren, Haozhe Zhao, Zefan Cai, Yuchi Wang, Peiyi Wang, Tianyu Liu, Baobao Chang

Guiding AMR Parsing with Reverse Graph Linearization

Oct 13, 2023
Bofei Gao, Liang Chen, Peiyi Wang, Zhifang Sui, Baobao Chang

MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning

Oct 02, 2023
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang

Mining Clues from Incomplete Utterance: A Query-enhanced Network for Incomplete Utterance Rewriting

Jul 03, 2023
Shuzheng Si, Shuang Zeng, Baobao Chang
