Wenjuan Han

Towards Comprehensive Multimodal Perception: Introducing the Touch-Language-Vision Dataset

Mar 14, 2024
Ning Cheng, You Li, Jing Gao, Bin Fang, Jinan Xu, Wenjuan Han

TransGPT: Multi-modal Generative Pre-trained Transformer for Transportation

Feb 11, 2024
Peng Wang, Xiang Wei, Fangxu Hu, Wenjuan Han

TransportationGames: Benchmarking Transportation Knowledge of (Multimodal) Large Language Models

Jan 09, 2024
Xue Zhang, Xiangyu Shi, Xinyue Lou, Rui Qi, Yufeng Chen, Jinan Xu, Wenjuan Han

CLOVA: A Closed-Loop Visual Assistant with Tool Usage and Update

Dec 18, 2023
Zhi Gao, Yuntao Du, Xintong Zhang, Xiaojian Ma, Wenjuan Han, Song-Chun Zhu, Qing Li

On the Robustness of Question Rewriting Systems to Questions of Varying Hardness

Nov 12, 2023
Hai Ye, Hwee Tou Ng, Wenjuan Han

Get the Ball Rolling: Alerting Autonomous Robots When to Help to Close the Healthcare Loop

Nov 05, 2023
Jiaxin Shen, Yanyao Liu, Ziming Wang, Ziyuan Jiao, Yufeng Chen, Wenjuan Han

A Quality-based Syntactic Template Retriever for Syntactically-controlled Paraphrase Generation

Oct 20, 2023
Xue Zhang, Songming Zhang, Yunlong Liang, Yufeng Chen, Jian Liu, Wenjuan Han, Jinan Xu

MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning

Oct 02, 2023
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang

CollabKG: A Learnable Human-Machine-Cooperative Information Extraction Toolkit for (Event) Knowledge Graph Construction

Jul 03, 2023
Xiang Wei, Yufeng Chen, Ning Cheng, Xingyu Cui, Jinan Xu, Wenjuan Han
