Zhoujun Cheng

OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments

Apr 11, 2024
Tianbao Xie, Danyang Zhang, Jixuan Chen, Xiaochuan Li, Siheng Zhao, Ruisheng Cao, Toh Jing Hua, Zhoujun Cheng, Dongchan Shin, Fangyu Lei, Yitao Liu, Yiheng Xu, Shuyan Zhou, Silvio Savarese, Caiming Xiong, Victor Zhong, Tao Yu

What Are Tools Anyway? A Survey from the Language Model Perspective

Mar 18, 2024
Zhiruo Wang, Zhoujun Cheng, Hao Zhu, Daniel Fried, Graham Neubig

OpenAgents: An Open Platform for Language Agents in the Wild

Oct 16, 2023
Tianbao Xie, Fan Zhou, Zhoujun Cheng, Peng Shi, Luoxuan Weng, Yitao Liu, Toh Jing Hua, Junning Zhao, Qian Liu, Che Liu, Leo Z. Liu, Yiheng Xu, Hongjin Su, Dongchan Shin, Caiming Xiong, Tao Yu

Lemur: Harmonizing Natural Language and Code for Language Agents

Oct 10, 2023
Yiheng Xu, Hongjin Su, Chen Xing, Boyu Mi, Qian Liu, Weijia Shi, Binyuan Hui, Fan Zhou, Yitao Liu, Tianbao Xie, Zhoujun Cheng, Siheng Zhao, Lingpeng Kong, Bailin Wang, Caiming Xiong, Tao Yu

Batch Prompting: Efficient Inference with Large Language Model APIs

Jan 19, 2023
Zhoujun Cheng, Jungo Kasai, Tao Yu

Reflection of Thought: Inversely Eliciting Numerical Reasoning in Language Models via Solving Linear Systems

Oct 11, 2022
Fan Zhou, Haoyu Dong, Qian Liu, Zhoujun Cheng, Shi Han, Dongmei Zhang

Binding Language Models in Symbolic Languages

Oct 06, 2022
Zhoujun Cheng, Tianbao Xie, Peng Shi, Chengzu Li, Rahul Nadkarni, Yushi Hu, Caiming Xiong, Dragomir Radev, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu

TaCube: Pre-computing Data Cubes for Answering Numerical-Reasoning Questions over Tabular Data

May 25, 2022
Fan Zhou, Mengkang Hu, Haoyu Dong, Zhoujun Cheng, Shi Han, Dongmei Zhang

Table Pre-training: A Survey on Model Architectures, Pretraining Objectives, and Downstream Tasks

Jan 27, 2022
Haoyu Dong, Zhoujun Cheng, Xinyi He, Mengyu Zhou, Anda Zhou, Fan Zhou, Ao Liu, Shi Han, Dongmei Zhang

