Hongwu Peng

APEER: Automatic Prompt Engineering Enhances Large Language Model Reranking

Jun 20, 2024

SSNet: A Lightweight Multi-Party Computation Scheme for Practical Privacy-Preserving Machine Learning Service in the Cloud

Jun 04, 2024

Learning from Teaching Regularization: Generalizable Correlations Should be Easy to Imitate

Feb 05, 2024

Zero-Space Cost Fault Tolerance for Transformer-based Language Models on ReRAM

Jan 22, 2024

Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads

Jan 19, 2024

MaxK-GNN: Towards Theoretical Speed Limits for Accelerating Graph Neural Networks Training

Dec 18, 2023

Advanced Language Model-Driven Verilog Development: Enhancing Power, Performance, and Area Optimization in Code Synthesis

Dec 02, 2023

Evaluating Emerging AI/ML Accelerators: IPU, RDU, and NVIDIA/AMD GPUs

Nov 08, 2023

LinGCN: Structural Linearized Graph Convolutional Network for Homomorphically Encrypted Inference

Sep 30, 2023

Accel-GCN: High-Performance GPU Accelerator Design for Graph Convolution Networks

Aug 22, 2023