Onur Mutlu

Analysis of Distributed Optimization Algorithms on a Real Processing-In-Memory System
Apr 10, 2024
Steve Rhyner, Haocong Luo, Juan Gómez-Luna, Mohammad Sadrosadati, Jiawei Jiang, Ataberk Olgun, Harshita Gupta, Ce Zhang, Onur Mutlu

Accelerating Graph Neural Networks on Real Processing-In-Memory Systems
Feb 26, 2024
Christina Giannoula, Peiming Yang, Ivan Fernandez Vega, Jiacheng Yang, Yu Xin Li, Juan Gomez Luna, Mohammad Sadrosadati, Onur Mutlu, Gennady Pekhimenko

Topologies of Reasoning: Demystifying Chains, Trees, and Graphs of Thoughts
Jan 25, 2024
Maciej Besta, Florim Memedi, Zhenyu Zhang, Robert Gerstenberger, Nils Blach, Piotr Nyczyk, Marcin Copik, Grzegorz Kwaśniewski, Jürgen Müller, Lukas Gianinazzi, Ales Kubicek, Hubert Niewiadomski, Onur Mutlu, Torsten Hoefler

TransPimLib: A Library for Efficient Transcendental Functions on Processing-in-Memory Systems
Apr 23, 2023
Maurus Item, Juan Gómez-Luna, Yuxin Guo, Geraldo F. Oliveira, Mohammad Sadrosadati, Onur Mutlu

RedBit: An End-to-End Flexible Framework for Evaluating the Accuracy of Quantized CNNs
Jan 15, 2023
André Santos, João Dinis Ferreira, Onur Mutlu, Gabriel Falcao

TargetCall: Eliminating the Wasted Computation in Basecalling via Pre-Basecalling Filtering
Dec 09, 2022
Meryem Banu Cavlak, Gagandeep Singh, Mohammed Alser, Can Firtina, Joël Lindegger, Mohammad Sadrosadati, Nika Mansouri Ghiasi, Can Alkan, Onur Mutlu

NEON: Enabling Efficient Support for Nonlinear Operations in Resistive RAM-based Neural Network Accelerators
Nov 10, 2022
Aditya Manglik, Minesh Patel, Haiyu Mao, Behzad Salami, Jisung Park, Lois Orosa, Onur Mutlu

Accelerating Neural Network Inference with Processing-in-DRAM: From the Edge to the Cloud
Sep 19, 2022
Geraldo F. Oliveira, Juan Gómez-Luna, Saugata Ghose, Amirali Boroumand, Onur Mutlu
