Marc Finzi

Compute Better Spent: Replacing Dense Layers with Structured Matrices

Jun 10, 2024

Non-Vacuous Generalization Bounds for Large Language Models

Dec 28, 2023

Large Language Models Are Zero-Shot Time Series Forecasters

Oct 11, 2023

CoLA: Exploiting Compositional Structure for Automatic and Efficient Numerical Linear Algebra

Sep 06, 2023

User-defined Event Sampling and Uncertainty Quantification in Diffusion Models for Physical Dynamical Systems

Jun 13, 2023

A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks

Apr 28, 2023

The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning

Apr 11, 2023

PAC-Bayes Compression Bounds So Tight That They Can Explain Generalization

Nov 24, 2022

The Lie Derivative for Measuring Learned Equivariance

Oct 06, 2022

Deconstructing the Inductive Biases of Hamiltonian Neural Networks

Feb 12, 2022