Fred Roosta

Stochastic Normalizing Flows

Feb 21, 2020
Liam Hodgkinson, Chris van der Heide, Fred Roosta, Michael W. Mahoney


Avoiding Kernel Fixed Points: Computing with ELU and GELU Infinite Networks

Feb 20, 2020
Russell Tsuchida, Tim Pearce, Christopher Van Der Heide, Fred Roosta, Marcus Gallagher


The reproducing Stein kernel approach for post-hoc corrected sampling

Jan 25, 2020
Liam Hodgkinson, Robert Salomone, Fred Roosta


LSAR: Efficient Leverage Score Sampling Algorithm for the Analysis of Big Time Series Data

Dec 26, 2019
Ali Eshragh, Fred Roosta, Asef Nazari, Michael W. Mahoney


Richer priors for infinitely wide multi-layer perceptrons

Nov 29, 2019
Russell Tsuchida, Fred Roosta, Marcus Gallagher


Limit theorems for out-of-sample extensions of the adjacency and Laplacian spectral embeddings

Sep 29, 2019
Keith Levin, Fred Roosta, Minh Tang, Michael W. Mahoney, Carey E. Priebe


Implicit Langevin Algorithms for Sampling From Log-concave Densities

Mar 29, 2019
Liam Hodgkinson, Robert Salomone, Fred Roosta


DINGO: Distributed Newton-Type Method for Gradient-Norm Optimization

Jan 16, 2019
Rixon Crane, Fred Roosta


Exchangeability and Kernel Invariance in Trained MLPs

Oct 27, 2018
Russell Tsuchida, Fred Roosta, Marcus Gallagher
