Shaoyi Huang

Zero-Space Cost Fault Tolerance for Transformer-based Language Models on ReRAM

Jan 22, 2024

MaxK-GNN: Towards Theoretical Speed Limits for Accelerating Graph Neural Networks Training

Dec 18, 2023

LinGCN: Structural Linearized Graph Convolutional Network for Homomorphically Encrypted Inference

Sep 30, 2023

Accel-GCN: High-Performance GPU Accelerator Design for Graph Convolution Networks

Aug 22, 2023

AutoReP: Automatic ReLU Replacement for Fast Private Network Inference

Aug 20, 2023

Neurogenesis Dynamics-inspired Spiking Neural Network Training Acceleration

Apr 24, 2023

RRNet: Towards ReLU-Reduced Neural Network for Two-party Computation Based Private Inference

Feb 22, 2023

Dynamic Sparse Training via More Exploration

Dec 14, 2022

Efficient Traffic State Forecasting using Spatio-Temporal Network Dependencies: A Sparse Graph Neural Network Approach

Nov 06, 2022

PolyMPCNet: Towards ReLU-free Neural Architecture Search in Two-party Computation Based Private Inference

Sep 20, 2022