Takashi Furuya

Transformers are Universal In-context Learners

Aug 02, 2024

Mixture of Experts Soften the Curse of Dimensionality in Operator Learning

Apr 13, 2024

Breaking the Curse of Dimensionality with Distributed Neural Computation

Feb 05, 2024

Convergences for Minimax Optimization Problems over Infinite-Dimensional Spaces Towards Stability in Adversarial Training

Dec 02, 2023

Globally injective and bijective neural operators

Jun 06, 2023

Fine-tuning Neural-Operator architectures for training and generalization

Jan 27, 2023

Variational Inference with Gaussian Mixture by Entropy Approximation

Feb 26, 2022

Spectral Pruning for Recurrent Neural Networks

May 23, 2021