
Jinho Lee

Slice-and-Forge: Making Better Use of Caches for Graph Convolutional Network Accelerators

Jan 24, 2023

Optimus-CC: Efficient Large NLP Model Training with 3D Parallelism Aware Communication Compression

Jan 24, 2023

Enabling Hard Constraints in Differentiable Neural Network and Accelerator Co-Exploration

Jan 23, 2023

ETF Portfolio Construction via Neural Network trained on Financial Statement Data

Jul 04, 2022

Shai-am: A Machine Learning Platform for Investment Strategies

Jul 01, 2022

It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher

Apr 01, 2022

Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples

Nov 04, 2021

An Attention Module for Convolutional Neural Networks

Aug 18, 2021

AutoReCon: Neural Architecture Search-based Reconstruction for Data-free Compression

May 25, 2021

GradPIM: A Practical Processing-in-DRAM Architecture for Gradient Descent

Feb 15, 2021