Dr Adrian Bishop
This talk will introduce a newly proposed framework for nonlinear stochastic optimal control. Roughly speaking, the optimal controller in this nonlinear stochastic setting is related to the solution of a deterministic partial differential equation (PDE): the Hamilton-Jacobi-Bellman (HJB) equation. We will discuss how the solution to the HJB equation can be formulated as an expectation over stochastic trajectories defined by an uncontrolled stochastic differential equation. Indeed, this relationship is a simple consequence of the Feynman-Kac formula (and, more generally, its nonlinear variants). It then follows that this expectation (or path integral) can be approximated via Monte Carlo simulation, and we show specifically how this leads to a Monte Carlo approximation of the optimal controller. We discuss the details involved in carrying out this Monte Carlo simulation, some of which are particular to the control framework. We also discuss the stability of the controlled system under the approximated controller. A number of other recent extensions and open discussion points will be touched upon. More generally, we hope to stimulate further interest in this topic.
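As a concrete illustration of the Monte Carlo idea (a minimal sketch, not code from the talk), consider a hypothetical 1-D system dx = u dt + sigma dW with quadratic running cost q(x) = x^2 and terminal cost phi(x) = x^2. Under the standard linearly-solvable assumption (lambda = sigma^2 with unit control cost), the Feynman-Kac representation weights uncontrolled sample paths by exp(-S/lambda), where S is the accumulated path cost, and the optimal control at the initial state is recovered as a weighted average of the first noise increment. All names and parameter values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D example (illustrative, not from the talk):
#   dynamics: dx = u dt + sigma dW,  running cost q(x) = x^2,
#   terminal cost phi(x) = x^2, unit control cost.
sigma = 1.0              # noise scale
lam = sigma**2           # linearly-solvable assumption: lambda = sigma^2 * R, R = 1
T, dt = 1.0, 0.01
n_steps = int(T / dt)
n_paths = 100_000

def q(x):
    return x**2

def phi(x):
    return x**2

def pi_control(x0):
    """Monte Carlo path-integral estimate of the optimal control at (x0, t=0).

    Simulates uncontrolled trajectories, weights each path by exp(-S/lambda)
    (the Feynman-Kac expectation), and averages the first noise increment
    under those weights to recover the control.
    """
    x = np.full(n_paths, float(x0))
    S = np.zeros(n_paths)                # accumulated path cost per trajectory
    dW0 = None
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        if k == 0:
            dW0 = dW                     # remember the first noise increment
        S += q(x) * dt
        x += sigma * dW                  # uncontrolled dynamics (u = 0)
    S += phi(x)
    w = np.exp(-(S - S.min()) / lam)     # importance weights, stabilised by S.min()
    return sigma * (w @ dW0) / (dt * w.sum())

print(pi_control(1.0))   # expected sign: negative (steers the state toward the origin)
```

With the quadratic costs above, the estimated control at x0 = 1 is negative and at x0 = -1 is positive, i.e. the Monte Carlo controller pushes the state toward the low-cost region, as the path-integral construction predicts. The weight stabilisation by subtracting S.min() is one of the practical details alluded to in the abstract: without it, exp(-S/lambda) can underflow for long horizons or large costs.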
The seminar will be followed by wine and finger food with the speaker. All attendees are welcome!