Sebastian Peitz

On the continuity and smoothness of the value function in reinforcement learning and optimal control

Mar 21, 2024
Hans Harder, Sebastian Peitz

A multiobjective continuation method to compute the regularization path of deep neural networks

Aug 24, 2023
Augustina C. Amakor, Konstantin Sonntag, Sebastian Peitz

Partial observations, coarse graining and equivariance in Koopman operator theory for large-scale dynamical systems

Jul 28, 2023
Sebastian Peitz, Hans Harder, Feliks Nüske, Friedrich Philipp, Manuel Schaller, Karl Worthmann

Learning a model is paramount for sample efficiency in reinforcement learning control of PDEs

Feb 14, 2023
Stefan Werner, Sebastian Peitz

Distributed Control of Partial Differential Equations Using Convolutional Reinforcement Learning

Jan 25, 2023
Sebastian Peitz, Jan Stenner, Vikas Chidananda, Oliver Wallscheid, Steven L. Brunton, Kunihiko Taira

Learning Bilinear Models of Actuated Koopman Generators from Partially-Observed Trajectories

Sep 20, 2022
Samuel E. Otto, Sebastian Peitz, Clarence W. Rowley

Efficient time stepping for numerical integration using reinforcement learning

Apr 08, 2021
Michael Dellnitz, Eyke Hüllermeier, Marvin Lücke, Sina Ober-Blöbaum, Christian Offen, Sebastian Peitz, Karlson Pfannschmidt

On the Universal Transformation of Data-Driven Models to Control Systems

Feb 09, 2021
Sebastian Peitz, Katharina Bieker

On the Treatment of Optimization Problems with L1 Penalty Terms via Multiobjective Continuation

Dec 14, 2020
Katharina Bieker, Bennet Gebken, Sebastian Peitz

Data-driven approximation of the Koopman generator: Model reduction, system identification, and control

Sep 23, 2019
Stefan Klus, Feliks Nüske, Sebastian Peitz, Jan-Hendrik Niemann, Cecilia Clementi, Christof Schütte
