Jiawei Wang

Feasibility of Local Trajectory Planning for Level-2+ Semi-autonomous Driving without Absolute Localization

Sep 06, 2023
Sheng Zhu, Jiawei Wang, Yu Yang, Bilin Aksun-Guvenc

Mixture-of-Domain-Adapters: Decoupling and Injecting Domain Knowledge to Pre-trained Language Models Memories

Jun 08, 2023
Shizhe Diao, Tianyang Xu, Ruijia Xu, Jiawei Wang, Tong Zhang

Towards Effective and Interpretable Human-Agent Collaboration in MOBA Games: A Communication Perspective

Apr 23, 2023
Yiming Gao, Feiyu Liu, Liang Wang, Zhenjie Lian, Weixuan Wang, Siqin Li, Xianliang Wang, Xianhan Zeng, Rundong Wang, Jiawei Wang, Qiang Fu, Wei Yang, Lanxiao Huang, Wei Liu

Robust Table Structure Recognition with Dynamic Queries Enhanced Detection Transformer

Mar 21, 2023
Jiawei Wang, Weihong Lin, Chixiang Ma, Mingze Li, Zheng Sun, Lei Sun, Qiang Huo

Mixed Cloud Control Testbed: Validating Vehicle-Road-Cloud Integration via Mixed Digital Twin

Dec 05, 2022
Jianghong Dong, Qing Xu, Jiawei Wang, Chunying Yang, Mengchi Cai, Chaoyi Chen, Jianqiang Wang, Keqiang Li

X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks

Nov 22, 2022
Yan Zeng, Xinsong Zhang, Hang Li, Jiawei Wang, Jipeng Zhang, Wangchunshu Zhou

TSRFormer: Table Structure Recognition with Transformers

Aug 09, 2022
Weihong Lin, Zheng Sun, Chixiang Ma, Mingze Li, Jiawei Wang, Lei Sun, Qiang Huo

Prefix Language Models are Unified Modal Learners

Jun 15, 2022
Shizhe Diao, Wangchunshu Zhou, Xinsong Zhang, Jiawei Wang

CODE-MVP: Learning to Represent Source Code from Multiple Views with Contrastive Pre-Training

May 04, 2022
Xin Wang, Yasheng Wang, Yao Wan, Jiawei Wang, Pingyi Zhou, Li Li, Hao Wu, Jin Liu

ArT: All-round Thinker for Unsupervised Commonsense Question-Answering

Dec 26, 2021
Jiawei Wang, Hai Zhao
