Karl Kunisch (U Graz/RICAM Linz): Solving the Hamilton-Jacobi-Bellman equation of optimal control: towards taming the curse of dimensionality
Optimal feedback controls for nonlinear systems are characterized by the solutions to a Hamilton-Jacobi-Bellman (HJB) equation. In the deterministic case, this is a first-order hyperbolic equation. Its dimension is that of the state space of the nonlinear system. Thus solving the HJB equation is a formidable task, and one is confronted with a curse of dimensionality. I give a brief survey of current solution strategies for partially coping with this challenging problem. Subsequently I describe two approaches in some detail. The first is a data-driven technique, which approximates the solution to the HJB equation and its gradient from an ensemble of open-loop solves. The second technique circumvents the direct solution of the HJB equation. It is based on a succinctly chosen learning ansatz, with subsequent approximation of the feedback gains by neural networks or polynomial basis functions.
This work relies on collaborations with B. Azmi (U Konstanz), D. Kalise (Imperial College), D. Vasquez-Varas (RICAM, Austrian Academy of Sciences), and D. Walter (Humboldt-Universität zu Berlin).
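The data-driven idea above can be illustrated on a toy problem. The sketch below is not the authors' method but a minimal stand-in: for a hypothetical scalar linear-quadratic problem (dynamics x' = a*x + b*u, cost integral of q*x^2 + r*u^2), synthetic value/gradient samples play the role of an ensemble of open-loop solves, a polynomial ansatz for the value function is fitted by least squares on both values and gradients, and the feedback law is recovered from the fitted gradient.

```python
import numpy as np

# Hypothetical 1D linear-quadratic test problem (illustrative stand-in):
#   dynamics  x' = a*x + b*u,   cost  integral of (q*x^2 + r*u^2) dt
a, b, q, r = 1.0, 1.0, 1.0, 1.0

# Exact value function is V(x) = p*x^2 with p the stable Riccati root of
#   2*a*p - (b**2/r)*p**2 + q = 0
p = (a + np.sqrt(a**2 + q * b**2 / r)) * r / b**2   # = 1 + sqrt(2) here

# "Open-loop data": samples (x_i, V(x_i), V'(x_i)). In the data-driven
# approach these would come from an ensemble of open-loop optimal solves;
# here they are generated from the known V for illustration.
xs = np.linspace(-2.0, 2.0, 41)
V, dV = p * xs**2, 2 * p * xs

# Fit a polynomial ansatz V_theta(x) = c0 + c1*x + c2*x^2 by least
# squares on values and gradients jointly (gradient-augmented fit).
Phi  = np.stack([np.ones_like(xs), xs, xs**2], axis=1)
dPhi = np.stack([np.zeros_like(xs), np.ones_like(xs), 2 * xs], axis=1)
A = np.vstack([Phi, dPhi])
y = np.concatenate([V, dV])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Feedback recovered from the fitted value function:
#   u(x) = -(b/r) * V_theta'(x)
def feedback(x):
    return -(b / r) * (coef[1] + 2 * coef[2] * x)
```

On this exactly representable example the fit recovers the Riccati coefficient, so the learned feedback coincides with the optimal gain; in higher dimensions the same least-squares structure applies with a richer basis.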
Lorenz Richter (Zuse Institute Berlin): An optimal control perspective on diffusion-based generative modeling leading to robust numerical methods
This talk establishes a connection between generative modeling based on SDEs and three classical fields of mathematics, namely stochastic optimal control, PDEs and path space measures. These perspectives are of both theoretical and practical value, for instance allowing one to transfer methods from one field to the others or leading to novel algorithms for sampling from unnormalized densities. In particular, we provide a general framework by introducing a variational formulation based on divergences between path space measures of time-reversed diffusion processes. This abstract perspective can be related to the famous Schrödinger bridge problem and leads to practical losses that can be optimized by gradient-based algorithms. At the same time, it allows us to consider divergences other than the reverse Kullback-Leibler divergence, which is known to suffer from mode collapse. We propose the so-called log-variance divergence, which exhibits favorable numerical properties and leads to significantly improved performance across multiple considered approaches.
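To make the log-variance idea concrete, here is a minimal numerical sketch (my illustration, not the talk's path-space setting): the divergence between two measures is the variance, under some sampling distribution, of the log Radon-Nikodym derivative. It vanishes exactly when the two measures agree (the log ratio is then almost surely constant), and the samples need not come from the model itself. The example uses two unit-variance 1D Gaussians in place of path space measures.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_ratio(x, mu_p, mu_q):
    # log p(x) - log q(x) for two unit-variance Gaussians
    # (a 1D stand-in for the log Radon-Nikodym derivative dP/dQ)
    return -0.5 * (x - mu_p)**2 + 0.5 * (x - mu_q)**2

# Samples from a reference distribution; the log-variance loss does not
# require sampling from q itself, unlike the reverse KL divergence.
x = rng.normal(loc=0.0, scale=1.0, size=100_000)

# Identical measures: the log ratio is constant, so the variance is zero.
assert np.var(log_ratio(x, mu_p=0.3, mu_q=0.3)) < 1e-12

# Differing measures: the log-variance loss is strictly positive.
# Here log_ratio(x) = 0.6*x, so its variance under N(0,1) is 0.36.
loss_lv = np.var(log_ratio(x, mu_p=0.3, mu_q=-0.3))
```

In the talk's setting the same variance-of-log-ratio loss is applied to path space measures of time-reversed diffusions, with the ratio evaluated along simulated trajectories.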
Oliver Ernst (TU Chemnitz): Learning to Integrate
The parametrization of a random field by a countable or finite number of i.i.d. scalar random variables with a given simple distribution is essential for the numerical solution of differential equations with randomly distributed inputs, a key mathematical model in the propagation and quantification of uncertainty in technical or scientific computer simulations. The classical template of such representations is the Karhunen-Loève expansion, which provides a linear modal expansion of a second-order random field in weighted eigenfunctions of its covariance operator with uncorrelated scalar random coefficients. For Gaussian random fields, these random coefficients are also independent and themselves Gaussian. For random fields with more complex probability laws, such as generalized or Lévy fields, such representations are more challenging to construct.
In this work, we construct a parametric representation of a random field using an invertible neural network (INN) and combine this with a sparse grid collocation method in order to sample realizations of a quantity of interest associated with the solution of a stationary diffusion equation.
Co-Authors: Hanno Gottschalk (TU Berlin), Toni Kowalewitz (TU Chemnitz), Patrick Krüger (U Wuppertal)
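The classical template mentioned above, the Karhunen-Loève expansion, can be sketched numerically in a few lines (my illustration of the standard construction, not the INN method of the talk): discretize the covariance kernel on a grid, eigendecompose, and build the field as a linear combination of the leading modes with i.i.d. standard normal coefficients. The exponential covariance kernel and the grid parameters below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Grid on [0, 1] and an exponential covariance c(s, t) = exp(-|s - t|/ell)
# (an illustrative choice; the construction works for any covariance).
n, ell = 200, 0.5
t = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(t[:, None] - t[None, :]) / ell)

# Discrete Karhunen-Loeve expansion: eigenpairs of the covariance matrix
# give the modes; for a Gaussian field the scalar coefficients are
# i.i.d. standard normal (uncorrelated implies independent here).
lam, phi = np.linalg.eigh(C)
lam, phi = lam[::-1], phi[:, ::-1]      # sort eigenvalues descending
m = 20                                  # truncation level
xi = rng.standard_normal(m)             # i.i.d. scalar coefficients
field = phi[:, :m] @ (np.sqrt(lam[:m]) * xi)

# Fraction of total variance captured by the truncated expansion:
captured = lam[:m].sum() / lam.sum()
```

For non-Gaussian laws the coefficients are merely uncorrelated, which is what motivates learned parametrizations such as the invertible neural network construction of the talk.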