
Mingli Song

Zhejiang University

Message-passing selection: Towards interpretable GNNs for graph classification

Jun 08, 2023

Improved Projection-free Online Continuous Submodular Maximization

May 29, 2023

Is Centralized Training with Decentralized Execution Framework Centralized Enough for MARL?

May 27, 2023

Non-stationary Online Convex Optimization with Arbitrary Delays

May 20, 2023

ViT-Calibrator: Decision Stream Calibration for Vision Transformer

May 05, 2023

Temporal Aggregation and Propagation Graph Neural Networks for Dynamic Representation

Apr 15, 2023

Transition Propagation Graph Neural Networks for Temporal Networks

Apr 15, 2023

Life Regression based Patch Slimming for Vision Transformers

Apr 11, 2023

Propheter: Prophetic Teacher Guided Long-Tailed Distribution Learning

Apr 09, 2023

Generalization Matters: Loss Minima Flattening via Parameter Hybridization for Efficient Online Knowledge Distillation

Mar 26, 2023