
Guangxiang Zhao

Well-classified Examples are Underestimated in Classification with Deep Neural Networks

Oct 15, 2021

Topology-Imbalance Learning for Semi-Supervised Node Classification

Oct 08, 2021

Learning Relation Alignment for Calibrated Cross-modal Retrieval

Jun 01, 2021

Layer-Wise Cross-View Decoding for Sequence-to-Sequence Learning

Jun 03, 2020

Explicit Sparse Transformer: Concentrated Attention Through Explicit Selection

Dec 25, 2019

MUSE: Parallel Multi-Scale Attention for Sequence to Sequence Learning

Nov 17, 2019

Understanding and Improving Layer Normalization

Nov 16, 2019

Review-Driven Multi-Label Music Style Classification by Exploiting Style Correlations

Aug 23, 2018