Cong Guo

A novel feature selection framework for incomplete data
Dec 07, 2023

Iterative missing value imputation based on feature importance
Nov 14, 2023

Accelerating Generic Graph Neural Networks via Architecture, Compiler, Partition Method Co-Design
Aug 16, 2023

AdaptGear: Accelerating GNN Training via Adaptive Subgraph-Level Kernels on GPUs
May 27, 2023

VDD: Varied Drone Dataset for Semantic Segmentation
May 23, 2023

Nesting Forward Automatic Differentiation for Memory-Efficient Deep Neural Network Training
Sep 22, 2022

ANT: Exploiting Adaptive Numerical Data Type for Low-bit Deep Neural Network Quantization
Aug 30, 2022

Efficient Activation Quantization via Adaptive Rounding Border for Post-Training Quantization
Aug 25, 2022

SQuant: On-the-Fly Data-Free Quantization via Diagonal Hessian Approximation
Feb 14, 2022

Dual-side Sparse Tensor Core
May 20, 2021