Jiannan Tian

Accelerating Communication in Deep Learning Recommendation Model Training with Dual-Level Adaptive Lossy Compression

Jul 05, 2024

HEAT: A Highly Efficient and Affordable Training System for Collaborative Filtering Based Recommendation on CPUs

May 03, 2023

SOLAR: A Highly Optimized Data Loading Framework for Distributed Training of CNN-based Scientific Surrogates

Nov 04, 2022

H-GCN: A Graph Convolutional Network Accelerator on Versal ACAP Architecture

Jun 28, 2022

An Efficient End-to-End Deep Learning Training Framework via Fine-Grained Pattern-Based Pruning

Nov 20, 2020

DeepSZ: A Novel Framework to Compress Deep Neural Networks by Using Error-Bounded Lossy Compression

Jan 26, 2019