Fred Roosta

Non-PSD Matrix Sketching with Applications to Regression and Optimization

Jun 16, 2021

Average-reward model-free reinforcement learning: a systematic review and literature mapping

Oct 18, 2020

Stochastic Normalizing Flows

Feb 25, 2020

Avoiding Kernel Fixed Points: Computing with ELU and GELU Infinite Networks

Feb 22, 2020

The reproducing Stein kernel approach for post-hoc corrected sampling

Jan 25, 2020

LSAR: Efficient Leverage Score Sampling Algorithm for the Analysis of Big Time Series Data

Dec 26, 2019

Richer priors for infinitely wide multi-layer perceptrons

Nov 29, 2019

Limit theorems for out-of-sample extensions of the adjacency and Laplacian spectral embeddings

Sep 29, 2019

Implicit Langevin Algorithms for Sampling From Log-concave Densities

Mar 29, 2019

DINGO: Distributed Newton-Type Method for Gradient-Norm Optimization

Jan 16, 2019