Xiaoshan Yang

Exploring Multi-Modal Contextual Knowledge for Open-Vocabulary Object Detection

Aug 30, 2023
Yifan Xu, Mengdan Zhang, Xiaoshan Yang, Changsheng Xu

Multi-modal Queried Object Detection in the Wild

May 30, 2023
Yifan Xu, Mengdan Zhang, Chaoyou Fu, Peixian Chen, Xiaoshan Yang, Ke Li, Changsheng Xu

CLIP-VG: Self-paced Curriculum Adapting of CLIP via Exploiting Pseudo-Language Labels for Visual Grounding

May 15, 2023
Linhui Xiao, Xiaoshan Yang, Fang Peng, Ming Yan, Yaowei Wang, Changsheng Xu

SgVA-CLIP: Semantic-guided Visual Adapting of Vision-Language Models for Few-shot Image Classification

Nov 28, 2022
Fang Peng, Xiaoshan Yang, Changsheng Xu

Shifting More Attention to Visual Backbone: Query-modulated Refinement Networks for End-to-End Visual Grounding

Mar 29, 2022
Jiabo Ye, Junfeng Tian, Ming Yan, Xiaoshan Yang, Xuwu Wang, Ji Zhang, Liang He, Xin Lin

Dynamic Hypergraph Convolutional Networks for Skeleton-Based Action Recognition

Dec 20, 2021
Jinfeng Wei, Yunxin Wang, Mengli Guo, Pei Lv, Xiaoshan Yang, Mingliang Xu

ECKPN: Explicit Class Knowledge Propagation Network for Transductive Few-shot Learning

Jun 16, 2021
Chaofan Chen, Xiaoshan Yang, Changsheng Xu, Xuhui Huang, Zhe Ma

Health Status Prediction with Local-Global Heterogeneous Behavior Graph

Mar 23, 2021
Xuan Ma, Xiaoshan Yang, Junyu Gao, Changsheng Xu

Data-driven Image Restoration with Option-driven Learning for Big and Small Astronomical Image Datasets

Nov 07, 2020
Peng Jia, Ruiyu Ning, Ruiqi Sun, Xiaoshan Yang, Dongmei Cai

Time-Guided High-Order Attention Model of Longitudinal Heterogeneous Healthcare Data

Nov 28, 2019
Yi Huang, Xiaoshan Yang, Changsheng Xu
