Quan Gan


Continuous Sign Language Recognition Based on Motor attention mechanism and frame-level Self-distillation

Feb 29, 2024
Qidan Zhu, Jing Li, Fei Yuan, Quan Gan


GFS: Graph-based Feature Synthesis for Prediction over Relational Databases

Dec 04, 2023
Han Zhang, Quan Gan, David Wipf, Weinan Zhang


GNNFlow: A Distributed Framework for Continuous Temporal GNN Learning on Dynamic Graphs

Nov 30, 2023
Yuchen Zhong, Guangming Sheng, Tianzuo Qin, Minjie Wang, Quan Gan, Chuan Wu


Efficient Link Prediction via GNN Layers Induced by Negative Sampling

Oct 14, 2023
Yuxin Wang, Xiannian Hu, Quan Gan, Xuanjing Huang, Xipeng Qiu, David Wipf


From Hypergraph Energy Functions to Hypergraph Neural Networks

Jun 19, 2023
Yuxin Wang, Quan Gan, Xipeng Qiu, Xuanjing Huang, David Wipf


Continuous sign language recognition based on cross-resolution knowledge distillation

Mar 13, 2023
Qidan Zhu, Jing Li, Fei Yuan, Quan Gan


ReFresh: Reducing Memory Access from Exploiting Stable Historical Embeddings for Graph Neural Network Training

Jan 19, 2023
Kezhao Huang, Haitian Jiang, Minjie Wang, Guangxuan Xiao, David Wipf, Xiang Song, Quan Gan, Zengfeng Huang, Jidong Zhai, Zheng Zhang


Refined Edge Usage of Graph Neural Networks for Edge Prediction

Dec 25, 2022
Jiarui Jin, Yangkun Wang, Weinan Zhang, Quan Gan, Xiang Song, Yong Yu, Zheng Zhang, David Wipf


Temporal superimposed crossover module for effective continuous sign language

Nov 07, 2022
Qidan Zhu, Jing Li, Fei Yuan, Quan Gan
