
Director of Science & Innovation—Grand Challenges (Environment & Sustainability)

Professor Marc Deisenroth is the Google DeepMind Chair of Machine Learning and Artificial Intelligence at University College London, part of the UNESCO Chair on Artificial Intelligence at UCL, and Director of Science & Innovation—Grand Challenges (Environment & Sustainability) at The Alan Turing Institute. He also holds a visiting faculty position at the University of Johannesburg. Marc co-leads the Sustainability and Machine Learning Group at UCL. His research interests center on data-efficient machine learning, probabilistic modeling, and autonomous decision-making, with applications in climate and weather science, nuclear fusion, and robotics.

Marc was Program Chair of EWRL 2012, Workshops Chair of RSS 2013, EXPO Chair of ICML 2020, Tutorials Chair of NeurIPS 2021, and Program Chair of ICLR 2022. He is an elected member of the ICML Board. He received Paper Awards at ICRA 2014, ICCAS 2016, ICML 2020, AISTATS 2021, and FAccT 2023. In 2019, Marc co-organized the Machine Learning Summer School in London.

In 2018, Marc received The President’s Award for Outstanding Early Career Researcher at Imperial College. He is a recipient of a Google Faculty Research Award and a Microsoft PhD Grant.

In 2018, Marc spent four months at the African Institute for Mathematical Sciences (Rwanda), where he taught a course on Foundations of Machine Learning as part of the African Masters in Machine Intelligence. He is co-author of the book Mathematics for Machine Learning, published by Cambridge University Press.

**Machine Learning:** Data-efficient machine learning, Gaussian processes, reinforcement learning, Bayesian optimization, approximate inference, deep probabilistic models, geo-spatial models

**Robotics and Control:** Robot learning, legged locomotion, planning under uncertainty, imitation learning, adaptive control, robust control, learning control, optimal control

**Signal Processing:** Nonlinear state estimation, Kalman filtering, time-series modeling, dynamical systems, system identification, stochastic information processing

Iterative State Estimation in Non-linear Dynamical Systems Using Approximate Expectation Propagation

Bayesian inference in non-linear dynamical systems seeks to find good posterior approximations of a latent state given a sequence of observations. Gaussian filters and smoothers, including the (extended/unscented) Kalman filter/smoother, which are commonly used in engineering applications, yield Gaussian posteriors on the latent state. While they are computationally efficient, they are often criticised for their crude approximation of the posterior state distribution. In this paper, we address this criticism by proposing a message passing scheme for iterative state estimation in non-linear dynamical systems, which yields more informative (Gaussian) posteriors on the latent states. Our message passing scheme is based on expectation propagation (EP). We prove that classical Rauch–Tung–Striebel (RTS) smoothers, such as the extended Kalman smoother (EKS) or the unscented Kalman smoother (UKS), are special cases of our message passing scheme. Running the message passing scheme more than once can lead to significant improvements of the classical RTS smoothers, so that more informative state estimates can be obtained. We address potential convergence issues of EP by generalising our state estimation framework to damped updates and the consideration of general alpha-divergences.
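To make the abstract concrete, the sketch below shows the classical Gaussian baseline that the paper's message-passing scheme generalizes: a Kalman filter forward pass followed by a Rauch–Tung–Striebel (RTS) smoother, here on a simple 1-D random walk. This is an illustrative example only, not the paper's EP algorithm; the model, noise variances, and variable names are assumptions made for the demo.

```python
import numpy as np

# Illustrative sketch (NOT the paper's EP scheme): linear Kalman filter
# + RTS smoother on a 1-D random walk x_t = x_{t-1} + w_t, y_t = x_t + v_t.
# Noise settings q, r and the random seed are arbitrary choices for the demo.
rng = np.random.default_rng(0)
T = 50
q, r = 0.1, 0.5                                # process / observation noise variances
x = np.cumsum(rng.normal(0.0, np.sqrt(q), T))  # latent random walk
y = x + rng.normal(0.0, np.sqrt(r), T)         # noisy observations

# Forward pass: Kalman filter (transition matrix A = 1, observation H = 1)
m_f, P_f = np.zeros(T), np.zeros(T)
m_pred, P_pred = 0.0, 1.0                      # prior on the initial state
for t in range(T):
    if t > 0:
        m_pred, P_pred = m_f[t - 1], P_f[t - 1] + q   # predict step
    k = P_pred / (P_pred + r)                         # Kalman gain
    m_f[t] = m_pred + k * (y[t] - m_pred)             # filtered mean
    P_f[t] = (1.0 - k) * P_pred                       # filtered variance

# Backward pass: RTS smoother refines the filtered estimates
m_s, P_s = m_f.copy(), P_f.copy()
for t in range(T - 2, -1, -1):
    P_pred = P_f[t] + q                               # one-step-ahead variance
    g = P_f[t] / P_pred                               # smoother gain
    m_s[t] = m_f[t] + g * (m_s[t + 1] - m_f[t])       # smoothed mean
    P_s[t] = P_f[t] + g**2 * (P_s[t + 1] - P_pred)    # smoothed variance
```

One backward pass already tightens the posterior: the smoothed variances `P_s` are strictly smaller than the filtered variances `P_f` at every step except the last (where they coincide by construction). The paper's contribution is to iterate updates of this kind, EP-style, so the Gaussian posteriors improve further with each sweep.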

A Unifying Variational Framework for Gaussian Process Motion Planning. *Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS)*, 2024.

Thin and Deep Gaussian Processes. *Advances in Neural Information Processing Systems*, 2023.

Neural Field Movement Primitives for Joint Modelling of Scenes and Motions. *Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)*, 2023.

Sliding Touch-based Exploration for Modeling Unknown Object Shape with Multi-finger Hands. *Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)*, 2023.

Safe Trajectory Sampling in Model-based Reinforcement Learning. *Proceedings of the International Conference on Automation Science and Engineering (CASE)*, 2023.