Tolga Ergen

MASSW: A New Dataset and Benchmark Tasks for AI-Assisted Scientific Workflows (Jun 10, 2024)

A Library of Mirrors: Deep Neural Nets in Low Dimensions are Convex Lasso Models with Reflection Features (Mar 02, 2024)

The Convex Landscape of Neural Networks: Characterizing Global Optima and Stationary Points via Lasso Models (Dec 19, 2023)

Fixing the NTK: From Neural Network Linearizations to Exact Convex Programs (Sep 26, 2023)

Globally Optimal Training of Neural Networks with Threshold Activation Functions (Mar 06, 2023)

Convexifying Transformers: Improving optimization and understanding of transformer networks (Nov 20, 2022)

GLEAM: Greedy Learning for Large-Scale Accelerated MRI Reconstruction (Jul 18, 2022)

Unraveling Attention via Convex Duality: Analysis and Interpretations of Vision Transformers (May 20, 2022)

Path Regularization: A Convexity and Sparsity Inducing Regularization for Parallel ReLU Networks (Oct 25, 2021)

Parallel Deep Neural Networks Have Zero Duality Gap (Oct 13, 2021)