
Jae-sun Seo

Hybrid In-memory Computing Architecture for the Training of Deep Neural Networks
Feb 10, 2021

Benchmarking TinyML Systems: Challenges and Direction
Mar 10, 2020

High-Throughput In-Memory Computing for Binary Deep Neural Networks with Monolithically Integrated RRAM and 90nm CMOS
Sep 16, 2019

Automatic Compiler Based FPGA Accelerator for CNN Training
Aug 15, 2019

FixyNN: Efficient Hardware for Mobile Computer Vision via Transfer Learning
Feb 27, 2019

Large-Scale Neuromorphic Spiking Array Processors: A quest to mimic the brain
May 23, 2018

Minimizing Area and Energy of Deep Learning Hardware Design Using Collective Low Precision and Structured Compression
Apr 19, 2018

Algorithm and Hardware Design of Discrete-Time Spiking Neural Networks Based on Back Propagation with Binary Activations
Sep 19, 2017

Comprehensive Evaluation of OpenCL-based Convolutional Neural Network Accelerators in Xilinx and Altera FPGAs
Sep 29, 2016

Reducing the Model Order of Deep Neural Networks Using Information Theory
May 16, 2016