Shuheng Shen

E-ANT: A Large-Scale Dataset for Efficient Automatic GUI NavigaTion

Jun 20, 2024

Revisiting Modularity Maximization for Graph Clustering: A Contrastive Learning Perspective

Jun 20, 2024

Clean-image Backdoor Attacks

Mar 26, 2024

Joint Local Relational Augmentation and Global Nash Equilibrium for Federated Learning with Non-IID Data

Aug 17, 2023

Differentially Private Learning with Per-Sample Adaptive Clipping

Dec 01, 2022

Deep Unified Representation for Heterogeneous Recommendation

Jan 26, 2022

STL-SGD: Speeding Up Local SGD with Stagewise Communication Period

Jun 11, 2020

Variance Reduced Local SGD with Lower Communication Complexity

Dec 30, 2019

Faster Distributed Deep Net Training: Computation and Communication Decoupled Stochastic Gradient Descent

Jun 28, 2019

Asynchronous Stochastic Composition Optimization with Variance Reduction

Nov 15, 2018