
Julian Kranz

PADAM: Parallel averaged Adam reduces the error for stochastic optimization in scientific machine learning

May 28, 2025

SAD Neural Networks: Divergent Gradient Flows and Asymptotic Optimality via o-minimal Structures

May 14, 2025