Satoshi Hayakawa

A Quadrature Approach for General-Purpose Batch Bayesian Optimization via Probabilistic Lifting

Apr 19, 2024
Masaki Adachi, Satoshi Hayakawa, Martin Jørgensen, Saad Hamid, Harald Oberhauser, Michael A. Osborne

Policy Gradient with Kernel Quadrature

Oct 23, 2023
Satoshi Hayakawa, Tetsuro Morimura

Domain-Agnostic Batch Bayesian Optimization with Diverse Constraints via Bayesian Quadrature

Jun 09, 2023
Masaki Adachi, Satoshi Hayakawa, Xingchen Wan, Martin Jørgensen, Harald Oberhauser, Michael A. Osborne

SOBER: Scalable Batch Bayesian Optimization and Quadrature using Recombination Constraints

Jan 30, 2023
Masaki Adachi, Satoshi Hayakawa, Saad Hamid, Martin Jørgensen, Harald Oberhauser, Michael A. Osborne

Quantum Ridgelet Transform: Winning Lottery Ticket of Neural Networks with Quantum Computation

Jan 27, 2023
Hayata Yamasaki, Sathyawageeswar Subramanian, Satoshi Hayakawa, Sho Sonoda

Sampling-based Nyström Approximation and Kernel Quadrature

Jan 23, 2023
Satoshi Hayakawa, Harald Oberhauser, Terry Lyons

Fast Bayesian Inference with Batch Bayesian Quadrature via Kernel Recombination

Jun 09, 2022
Masaki Adachi, Satoshi Hayakawa, Martin Jørgensen, Harald Oberhauser, Michael A. Osborne

Positively Weighted Kernel Quadrature via Subsampling

Jul 20, 2021
Satoshi Hayakawa, Harald Oberhauser, Terry Lyons

On the minimax optimality and superiority of deep neural network learning over sparse parameter spaces

May 22, 2019
Satoshi Hayakawa, Taiji Suzuki
