Professor Marc Deisenroth is the DeepMind Chair of Machine Learning and Artificial Intelligence at University College London and the Deputy Director of the UCL Centre for Artificial Intelligence. He also holds a visiting faculty position at the University of Johannesburg. Marc leads the Statistical Machine Learning Group at UCL. His research interests center on data-efficient machine learning, probabilistic modeling, and autonomous decision making, with applications in climate/weather science and robotics.
Marc was Program Chair of EWRL 2012, Workshops Chair of RSS 2013, EXPO Chair at ICML 2020, Tutorials Chair at NeurIPS 2021, and Program Chair at ICLR 2022. He is an elected member of the ICML Board. He received Paper Awards at ICRA 2014, ICCAS 2016, ICML 2020, and AISTATS 2021. In 2019, Marc co-organized the Machine Learning Summer School in London.
In 2018, Marc received The President’s Award for Outstanding Early Career Researcher at Imperial College. He is a recipient of a Google Faculty Research Award and a Microsoft PhD Grant.
In 2018, Marc spent four months at the African Institute for Mathematical Sciences (Rwanda), where he taught a course on Foundations of Machine Learning as part of the African Masters in Machine Intelligence. He is co-author of the book Mathematics for Machine Learning, published by Cambridge University Press.
Machine Learning: Data-efficient machine learning, Gaussian processes, reinforcement learning, Bayesian optimization, approximate inference, deep probabilistic models
Robotics and Control: Robot learning, legged locomotion, planning under uncertainty, imitation learning, adaptive control, robust control, learning control, optimal control
Signal Processing: Nonlinear state estimation, Kalman filtering, time-series modeling, dynamical systems, system identification, stochastic information processing
Data efficiency, i.e., learning from small datasets, is of practical importance in many real-world applications and decision-making systems. Data efficiency can be achieved in multiple ways, such as probabilistic modeling, where models and predictions are equipped with meaningful uncertainty estimates, Bayesian optimization, transfer learning, or the incorporation of valuable prior knowledge. In this talk, I will focus on how robot learning can benefit from data-efficient learning algorithms. In particular, I will motivate the use of Gaussian process models for learning good policies in three different settings: model-based reinforcement learning, transfer learning, and (if time permits) Bayesian optimization.
Bayesian optimization is a useful tool for fast optimization of black-box functions. Typically, Bayesian optimization relies on Gaussian processes as a surrogate model for the unknown function, which can then be used to find a good trade-off between exploitation of current knowledge and exploration. In the first part of the tutorial, I will give an introduction to Gaussian processes; in the second part, we will then get into the details of Bayesian optimization, building on the material from the first part.
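The loop described above can be sketched in a few lines. This is an illustrative toy implementation, not code from the tutorial: the squared-exponential kernel, expected-improvement acquisition, grid-based candidate search, and all hyperparameters below are assumptions made for the example.

```python
import numpy as np
from scipy.stats import norm

def rbf(a, b, ls=0.3):
    # squared-exponential kernel on 1-D inputs
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(x_tr, y_tr, x_q, noise=1e-6):
    # standard GP regression equations via a Cholesky factorisation
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    Ks = rbf(x_tr, x_q)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_tr))
    v = np.linalg.solve(L, Ks)
    mu = Ks.T @ alpha
    var = 1.0 - np.sum(v ** 2, axis=0)       # prior variance is 1 for this kernel
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best):
    # expected improvement over the incumbent, for minimisation
    sigma = np.sqrt(var)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

f = lambda x: np.sin(3.0 * x) + 0.5 * x      # "black-box" objective (illustrative)
x_tr = np.array([-1.0, 0.0, 1.5])            # initial design
y_tr = f(x_tr)
grid = np.linspace(-2.0, 2.0, 401)           # candidate set

for _ in range(15):
    mu, var = gp_posterior(x_tr, y_tr, grid)
    x_next = grid[np.argmax(expected_improvement(mu, var, y_tr.min()))]
    x_tr = np.append(x_tr, x_next)           # evaluate and augment the data
    y_tr = np.append(y_tr, f(x_next))
```

Each iteration trades off sampling where the posterior mean is low (exploitation) against where the posterior variance is high (exploration); after a handful of evaluations the incumbent is close to the grid's global minimum.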
Bayesian inference in non-linear dynamical systems seeks to find good posterior approximations of a latent state given a sequence of observations. Gaussian filters and smoothers, including the (extended/unscented) Kalman filter/smoother, which are commonly used in engineering applications, yield Gaussian posteriors on the latent state. While they are computationally efficient, they are often criticised for their crude approximation of the posterior state distribution. In this paper, we address this criticism by proposing a message passing scheme for iterative state estimation in non-linear dynamical systems, which yields more informative (Gaussian) posteriors on the latent states. Our message passing scheme is based on expectation propagation (EP). We prove that classical Rauch–Tung–Striebel (RTS) smoothers, such as the extended Kalman smoother (EKS) or the unscented Kalman smoother (UKS), are special cases of our message passing scheme. Running the message passing scheme more than once can lead to significant improvements of the classical RTS smoothers, so that more informative state estimates can be obtained. We address potential convergence issues of EP by generalising our state estimation framework to damped updates and the consideration of general alpha-divergences.
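For reference, the classical linear-Gaussian baseline that the message passing scheme generalises, a Kalman filter followed by a Rauch–Tung–Striebel smoothing pass, can be sketched as follows (scalar state; the model parameters are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A, C = 0.95, 1.0          # linear transition / observation models
Q, R = 0.1, 0.5           # process / observation noise variances
T = 50

# simulate x_t = A x_{t-1} + q_t,  y_t = C x_t + r_t
x = np.zeros(T)
for t in range(1, T):
    x[t] = A * x[t - 1] + rng.normal(0.0, np.sqrt(Q))
y = C * x + rng.normal(0.0, np.sqrt(R), T)

mf, Pf = np.zeros(T), np.zeros(T)   # filtered means / variances
mp, Pp = np.zeros(T), np.zeros(T)   # one-step-ahead predictive means / variances
m, P = 0.0, 1.0                     # prior on the initial state
for t in range(T):
    mp[t], Pp[t] = (m, P) if t == 0 else (A * m, A * P * A + Q)  # predict
    K = Pp[t] * C / (C * Pp[t] * C + R)                          # Kalman gain
    m = mp[t] + K * (y[t] - C * mp[t])                           # measurement update
    P = (1.0 - K * C) * Pp[t]
    mf[t], Pf[t] = m, P

# backward Rauch–Tung–Striebel smoothing pass
ms, Ps = mf.copy(), Pf.copy()
for t in range(T - 2, -1, -1):
    G = Pf[t] * A / Pp[t + 1]                     # smoother gain
    ms[t] = mf[t] + G * (ms[t + 1] - mp[t + 1])
    Ps[t] = Pf[t] + G * (Ps[t + 1] - Pp[t + 1]) * G
```

In the linear-Gaussian case a single forward-backward sweep is exact; the point of the EP formulation is that, for non-linear systems, the corresponding messages can be iterated to refine the Gaussian posterior beyond what one EKS/UKS sweep provides.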
Vector-valued Gaussian Processes on Riemannian Manifolds via Gauge Independent Projected Kernels
As Gaussian processes are used to answer increasingly complex questions, analytic solutions become scarcer and scarcer. Monte Carlo methods act as a convenient bridge for connecting intractable mathematical expressions with actionable estimates via sampling. Conventional approaches for simulating Gaussian process posteriors view samples as draws from marginal distributions of process values at finite sets of input locations. This distribution-centric characterization leads to generative strategies that scale cubically in the size of the desired random vector. These methods are prohibitively expensive in cases where we would, ideally, like to draw high-dimensional vectors or even continuous sample paths. In this work, we investigate a different line of reasoning: rather than focusing on distributions, we articulate Gaussian conditionals at the level of random variables. We show how this pathwise interpretation of conditioning gives rise to a general family of approximations that lend themselves to efficiently sampling Gaussian process posteriors. Starting from first principles, we derive these methods and analyze the approximation errors they introduce. We then ground these results by exploring the practical implications of pathwise conditioning in various applied settings, such as global optimization and reinforcement learning.
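The pathwise view can be illustrated with Matheron's rule: draw one joint prior sample, then add a data-dependent correction to turn it into a posterior sample. The sketch below is a simplified exact-kernel version (the paper combines this update with scalable prior approximations); the kernel, toy data, and jitter values are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, ls=0.5):
    # squared-exponential kernel on 1-D inputs
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

x_train = np.array([-1.0, 0.3, 1.2])
y = np.sin(2.0 * x_train)                    # toy observations
x_test = np.linspace(-2.0, 2.0, 200)
noise = 1e-4                                 # observation-noise variance

# 1) one joint prior draw over training and test locations
x_all = np.concatenate([x_train, x_test])
K = rbf(x_all, x_all) + 1e-6 * np.eye(len(x_all))   # jitter for numerical stability
f_prior = np.linalg.cholesky(K) @ rng.standard_normal(len(x_all))
f_n, f_s = f_prior[:3], f_prior[3:]          # prior values at train / test points

# 2) pathwise (Matheron) update: correct the draw using the data
eps = rng.normal(0.0, np.sqrt(noise), 3)     # simulated observation noise
Knn = rbf(x_train, x_train) + noise * np.eye(3)
Ksn = rbf(x_test, x_train)
f_post = f_s + Ksn @ np.linalg.solve(Knn, y - (f_n + eps))   # a posterior sample path
```

The correction is deterministic given the prior draw, so the random variable itself is conditioned rather than a finite-dimensional marginal; the resulting `f_post` passes (up to noise) through the observations.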
Dynamic time warping (DTW) is a useful method for aligning, comparing and combining time series, but it requires them to live in comparable spaces. In this work, we consider a setting in which time series live on different spaces without a sensible ground metric, causing DTW to become ill-defined. To alleviate this, we propose Gromov dynamic time warping (GDTW), a distance between time series on potentially incomparable spaces that avoids the comparability requirement by instead considering intra-relational geometry. We demonstrate its effectiveness at aligning, combining and comparing time series living on incomparable spaces. We further propose a smoothed version of GDTW as a differentiable loss and assess its properties in a variety of settings, including barycentric averaging, generative modeling and imitation learning.
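For contrast, classical DTW, which GDTW extends to incomparable spaces, is a simple dynamic program over ground-metric costs. A minimal sketch (scalar sequences with an absolute-difference ground metric, purely illustrative):

```python
import numpy as np

def dtw(a, b):
    # D[i, j] = cost of the best monotone alignment of a[:i] with b[:j]
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])           # ground metric
            D[i, j] = cost + min(D[i - 1, j],         # a[i-1] matched again
                                 D[i, j - 1],         # b[j-1] matched again
                                 D[i - 1, j - 1])     # one-to-one match
    return D[n, m]

d = dtw([0.0, 1.0, 2.0, 3.0], [0.0, 0.0, 1.0, 2.0, 2.0, 3.0])  # 0.0: a time-warped copy
```

Note that the inner `cost` term is exactly where a shared ground metric between the two spaces is required; when no such metric exists, this quantity is undefined, which is the gap GDTW fills by comparing intra-sequence distances instead.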
Learning physically structured representations of dynamical systems that include contact between different objects is an important problem for deep-learning-based approaches in robotics. Black-box neural networks can learn to approximately represent discontinuous dynamics, but typically require impractical quantities of data, and often suffer from pathological behaviour when forecasting for longer time horizons. In this work, we use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects. We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations in settings which are traditionally difficult for black-box approaches and recent physics-inspired neural networks. Our results indicate that an idealised form of touch feedback—which is heavily relied upon by biological systems—is a key component of making this learning problem tractable. Together with the inductive biases introduced through the network architectures, our techniques enable accurate learning of contact dynamics from physical data.
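A toy example of the discontinuity involved is a bouncing ball, whose velocity flips instantaneously at contact. The hand-rolled integrator below is purely illustrative of the contact events such architectures must represent; it is not the paper's method, and all parameters are made up.

```python
import numpy as np

def simulate(h0=1.0, v0=0.0, e=0.8, g=9.81, dt=1e-3, T=3.0):
    # semi-implicit Euler integration with an explicit impact event at h = 0
    h, v = h0, v0
    heights = []
    for _ in range(int(round(T / dt))):
        v -= g * dt
        h += v * dt
        if h < 0.0:          # contact detected ("touch feedback")
            h = 0.0
            v = -e * v       # restitution: the velocity flips discontinuously
        heights.append(h)
    return np.array(heights)

heights = simulate()
```

The if-branch is the discontinuity: a smooth black-box model must approximate this jump from data, whereas an architecture with an explicit contact mechanism can represent it directly.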