Xiaoli Wang

Rethinking Multi-view Representation Learning via Distilled Disentangling

Mar 29, 2024
Guanzhou Ke, Bo Wang, Xiaoli Wang, Shengfeng He


Fine-tuning Large Language Models for Domain-specific Machine Translation

Feb 23, 2024
Jiawei Zheng, Hanghai Hong, Xiaoli Wang, Jingsong Su, Yonggui Liang, Shikai Wu


BESTMVQA: A Benchmark Evaluation System for Medical Visual Question Answering

Dec 13, 2023
Xiaojie Hong, Zixin Song, Liangzhi Li, Xiaoli Wang, Feiyan Liu


Towards Better Multi-modal Keyphrase Generation via Visual Entity Enhancement and Multi-granularity Image Noise Filtering

Sep 09, 2023
Yifan Dong, Suhang Wu, Fandong Meng, Jie Zhou, Xiaoli Wang, Jianxin Lin, Jinsong Su


Disentangling Multi-view Representations Beyond Inductive Bias

Aug 04, 2023
Guanzhou Ke, Yang Yu, Guoqing Chao, Xiaoli Wang, Chenyang Xu, Shengfeng He


ConKI: Contrastive Knowledge Injection for Multimodal Sentiment Analysis

Jun 27, 2023
Yakun Yu, Mingjun Zhao, Shi-ang Qi, Feiran Sun, Baoxun Wang, Weidong Guo, Xiaoli Wang, Lei Yang, Di Niu


A Sequence-to-Sequence&Set Model for Text-to-Table Generation

May 31, 2023
Tong Li, Zhihao Wang, Liangying Shao, Xuling Zheng, Xiaoli Wang, Jinsong Su


PRIMP: PRobabilistically-Informed Motion Primitives for Efficient Affordance Learning from Demonstration

May 25, 2023
Sipu Ruan, Weixiao Liu, Xiaoli Wang, Xin Meng, Gregory S. Chirikjian
