Xiaohan Chen

Rethinking the Capacity of Graph Neural Networks for Branching Strategy

Feb 11, 2024
Ziang Chen, Jialin Liu, Xiaohan Chen, Xinshang Wang, Wotao Yin


DIG-MILP: a Deep Instance Generator for Mixed-Integer Linear Programming with Feasibility Guarantee

Oct 20, 2023
Haoyu Wang, Jialin Liu, Xiaohan Chen, Xinshang Wang, Pan Li, Wotao Yin


Towards Constituting Mathematical Structures for Learning to Optimize

May 29, 2023
Jialin Liu, Xiaohan Chen, Zhangyang Wang, Wotao Yin, HanQin Cai


More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity

Jul 07, 2022
Shiwei Liu, Tianlong Chen, Xiaohan Chen, Xuxi Chen, Qiao Xiao, Boqian Wu, Mykola Pechenizkiy, Decebal Mocanu, Zhangyang Wang


The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training

Feb 05, 2022
Shiwei Liu, Tianlong Chen, Xiaohan Chen, Li Shen, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy


Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better

Dec 18, 2021
Sameer Bibikar, Haris Vikalo, Zhangyang Wang, Xiaohan Chen


Hyperparameter Tuning is All You Need for LISTA

Oct 29, 2021
Xiaohan Chen, Jialin Liu, Zhangyang Wang, Wotao Yin


Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot?

Jul 01, 2021
Xiaolong Ma, Geng Yuan, Xuan Shen, Tianlong Chen, Xuxi Chen, Xiaohan Chen, Ning Liu, Minghai Qin, Sijia Liu, Zhangyang Wang, Yanzhi Wang


FreeTickets: Accurate, Robust and Efficient Deep Ensemble by Training with Dynamic Sparsity

Jun 28, 2021
Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu


Sparse Training via Boosting Pruning Plasticity with Neuroregeneration

Jun 19, 2021
Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu
