Jae-sun Seo

Transformer-based Selective Super-Resolution for Efficient Image Refinement

Dec 10, 2023
Tianyi Zhang, Kishore Kasichainula, Yaoxin Zhuo, Baoxin Li, Jae-sun Seo, Yu Cao

NeuroBench: Advancing Neuromorphic Computing through Collaborative, Fair and Representative Benchmarking

Apr 15, 2023
Jason Yik, Soikat Hasan Ahmed, Zergham Ahmed, Brian Anderson, Andreas G. Andreou, Chiara Bartolozzi, Arindam Basu, Douwe den Blanken, Petrut Bogdan, Sander Bohte, Younes Bouhadjar, Sonia Buckley, Gert Cauwenberghs, Federico Corradi, Guido de Croon, Andreea Danielescu, Anurag Daram, Mike Davies, Yigit Demirag, Jason Eshraghian, Jeremy Forest, Steve Furber, Michael Furlong, Aditya Gilra, Giacomo Indiveri, Siddharth Joshi, Vedant Karia, Lyes Khacef, James C. Knight, Laura Kriener, Rajkumar Kubendran, Dhireesha Kudithipudi, Gregor Lenz, Rajit Manohar, Christian Mayr, Konstantinos Michmizos, Dylan Muir, Emre Neftci, Thomas Nowotny, Fabrizio Ottati, Ayca Ozcelikkale, Noah Pacik-Nelson, Priyadarshini Panda, Sun Pao-Sheng, Melika Payvand, Christian Pehle, Mihai A. Petrovici, Christoph Posch, Alpha Renner, Yulia Sandamirskaya, Clemens JS Schaefer, André van Schaik, Johannes Schemmel, Catherine Schuman, Jae-sun Seo, Sadique Sheik, Sumit Bam Shrestha, Manolis Sifalakis, Amos Sironi, Kenneth Stewart, Terrence C. Stewart, Philipp Stratmann, Guangzhi Tang, Jonathan Timcheck, Marian Verhelst, Craig M. Vineyard, Bernhard Vogginger, Amirreza Yousefzadeh, Biyan Zhou, Fatima Tuz Zohora, Charlotte Frenkel, Vijay Janapa Reddi

SIAM: Chiplet-based Scalable In-Memory Acceleration with Mesh for Deep Neural Networks

Aug 14, 2021
Gokul Krishnan, Sumit K. Mandal, Manvitha Pannala, Chaitali Chakrabarti, Jae-sun Seo, Umit Y. Ogras, Yu Cao

Impact of On-Chip Interconnect on In-Memory Acceleration of Deep Neural Networks

Jul 06, 2021
Gokul Krishnan, Sumit K. Mandal, Chaitali Chakrabarti, Jae-sun Seo, Umit Y. Ogras, Yu Cao

RA-BNN: Constructing Robust & Accurate Binary Neural Network to Simultaneously Defend Adversarial Bit-Flip Attack and Improve Accuracy

Mar 22, 2021
Adnan Siraj Rakin, Li Yang, Jingtao Li, Fan Yao, Chaitali Chakrabarti, Yu Cao, Jae-sun Seo, Deliang Fan

Hybrid In-memory Computing Architecture for the Training of Deep Neural Networks

Feb 10, 2021
Vinay Joshi, Wangxin He, Jae-sun Seo, Bipin Rajendran

Benchmarking TinyML Systems: Challenges and Direction

Mar 10, 2020
Colby R. Banbury, Vijay Janapa Reddi, Max Lam, William Fu, Amin Fazel, Jeremy Holleman, Xinyuan Huang, Robert Hurtado, David Kanter, Anton Lokhmotov, David Patterson, Danilo Pau, Jae-sun Seo, Jeff Sieracki, Urmish Thakker, Marian Verhelst, Poonam Yadav

High-Throughput In-Memory Computing for Binary Deep Neural Networks with Monolithically Integrated RRAM and 90nm CMOS

Sep 16, 2019
Shihui Yin, Xiaoyu Sun, Shimeng Yu, Jae-sun Seo

Automatic Compiler Based FPGA Accelerator for CNN Training

Aug 15, 2019
Shreyas Kolala Venkataramanaiah, Yufei Ma, Shihui Yin, Eriko Nurvitadhi, Aravind Dasu, Yu Cao, Jae-sun Seo
