
Jae-sun Seo

Torch2Chip: An End-to-end Customizable Deep Neural Network Compression and Deployment Toolkit for Prototype Hardware Accelerator Design

May 06, 2024

Transformer-based Selective Super-Resolution for Efficient Image Refinement

Dec 10, 2023

NeuroBench: Advancing Neuromorphic Computing through Collaborative, Fair and Representative Benchmarking

Apr 15, 2023

SIAM: Chiplet-based Scalable In-Memory Acceleration with Mesh for Deep Neural Networks

Aug 14, 2021

Impact of On-Chip Interconnect on In-Memory Acceleration of Deep Neural Networks

Jul 06, 2021

RA-BNN: Constructing Robust & Accurate Binary Neural Network to Simultaneously Defend Adversarial Bit-Flip Attack and Improve Accuracy

Mar 22, 2021

Hybrid In-memory Computing Architecture for the Training of Deep Neural Networks

Feb 10, 2021

Benchmarking TinyML Systems: Challenges and Direction

Mar 10, 2020

High-Throughput In-Memory Computing for Binary Deep Neural Networks with Monolithically Integrated RRAM and 90nm CMOS

Sep 16, 2019

Automatic Compiler Based FPGA Accelerator for CNN Training

Aug 15, 2019