Judith Rousseau


Posterior Contraction Rates for Matérn Gaussian Processes on Riemannian Manifolds

Sep 22, 2023
Paul Rosa, Viacheslav Borovitskiy, Alexander Terenin, Judith Rousseau


Scalable Variational Bayes methods for Hawkes processes

Dec 01, 2022
Deborah Sulem, Vincent Rivoirard, Judith Rousseau


Fast Bayesian Coresets via Subsampling and Quasi-Newton Refinement

Mar 18, 2022
Cian Naik, Judith Rousseau, Trevor Campbell


Stable ResNet

Oct 24, 2020
Soufiane Hayou, Eugenio Clerico, Bobby He, George Deligiannidis, Arnaud Doucet, Judith Rousseau


Training Dynamics of Deep Networks using Stochastic Gradient Descent via Neural Tangent Kernel

Jun 07, 2019
Soufiane Hayou, Arnaud Doucet, Judith Rousseau


On the Impact of the Activation Function on Deep Neural Networks Training

Feb 19, 2019
Soufiane Hayou, Arnaud Doucet, Judith Rousseau


On the Selection of Initialization and Activation Function for Deep Neural Networks

Oct 07, 2018
Soufiane Hayou, Arnaud Doucet, Judith Rousseau


Bayesian matrix completion: prior specification

Oct 22, 2014
Pierre Alquier, Vincent Cottet, Nicolas Chopin, Judith Rousseau
