Arda Sahiner

GLEAM: Greedy Learning for Large-Scale Accelerated MRI Reconstruction

Jul 18, 2022
Batu Ozturkler, Arda Sahiner, Tolga Ergen, Arjun D. Desai, Christopher M. Sandino, Shreyas Vasanawala, John M. Pauly, Morteza Mardani, Mert Pilanci


Unraveling Attention via Convex Duality: Analysis and Interpretations of Vision Transformers

May 20, 2022
Arda Sahiner, Tolga Ergen, Batu Ozturkler, John Pauly, Morteza Mardani, Mert Pilanci


Scale-Equivariant Unrolled Neural Networks for Data-Efficient Accelerated MRI Reconstruction

Apr 21, 2022
Beliz Gunel, Arda Sahiner, Arjun D. Desai, Akshay S. Chaudhari, Shreyas Vasanawala, Mert Pilanci, John Pauly


Fast Convex Optimization for Two-Layer ReLU Networks: Equivalent Model Classes and Cone Decompositions

Feb 05, 2022
Aaron Mishkin, Arda Sahiner, Mert Pilanci


Hidden Convexity of Wasserstein GANs: Interpretable Generative Models with Closed-Form Solutions

Jul 12, 2021
Arda Sahiner, Tolga Ergen, Batu Ozturkler, Burak Bartan, John Pauly, Morteza Mardani, Mert Pilanci


Demystifying Batch Normalization in ReLU Networks: Equivalent Convex Optimization Models and Implicit Regularization

Mar 02, 2021
Tolga Ergen, Arda Sahiner, Batu Ozturkler, John Pauly, Morteza Mardani, Mert Pilanci


Vector-output ReLU Neural Network Problems are Copositive Programs: Convex Analysis of Two Layer Networks and Polynomial-time Algorithms

Dec 24, 2020
Arda Sahiner, Tolga Ergen, John Pauly, Mert Pilanci


Convex Regularization Behind Neural Reconstruction

Dec 09, 2020
Arda Sahiner, Morteza Mardani, Batu Ozturkler, Mert Pilanci, John Pauly
