Xiaohan Chen

Chasing Better Deep Image Priors between Over- and Under-parameterization

Oct 31, 2024

Expressive Power of Graph Neural Networks for Quadratic Programs

Jun 09, 2024

Learning to optimize: A tutorial for continuous and mixed-integer optimization

May 24, 2024

Rethinking the Capacity of Graph Neural Networks for Branching Strategy

Feb 11, 2024

DIG-MILP: a Deep Instance Generator for Mixed-Integer Linear Programming with Feasibility Guarantee

Oct 20, 2023

Towards Constituting Mathematical Structures for Learning to Optimize

May 29, 2023

More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity

Jul 07, 2022

The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training

Feb 05, 2022

Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better

Dec 18, 2021

Hyperparameter Tuning is All You Need for LISTA

Oct 29, 2021