## About

Hello there! I'm a research scientist at DeepMind, where I work on reinforcement learning and representation learning.
In my spare time, I blog about things that interest me.

## Some Publications

I've put some recent papers below. A full list of my publications can be found here.

**Learning Dynamics and Generalization in Reinforcement Learning.** Clare Lyle, Mark Rowland, Will Dabney, Marta Kwiatkowska, Yarin Gal. ICML 2022, RLDM Spotlight. (arXiv link).

**Revisiting the train loss: an efficient performance estimator for neural architecture search.** Binxin Ru*, Clare Lyle*, Lisa Schut, Mark van der Wilk, and Yarin Gal. NeurIPS 2021. (arXiv link).

**On the Effect of Auxiliary Tasks on Representation Dynamics.** Clare Lyle*, Mark Rowland*, Georg Ostrovski, Will Dabney. AISTATS 2021. (arXiv link).

## Talks

I've given a few talks over the years. Most of these weren't recorded, but the slides might still be useful to give a narrative overview of some of the projects I've worked on.

**Conference on Lifelong Learning Agents Keynote**, August 2023. Why do neural networks lose plasticity? | Recording
**OxCSML Group**, December 2021. Representation Dynamics and Feature Collapse in RL | Slides
**G-Research ML College**, November 2021. The Many Faces of Model Selection | Slides
**GenU Workshop**, October 2021. Bayesian Model Selection and Generalization in Deep Learning | Slides
**Simons Institute Workshop on Theory of Deep RL**, October 2020. Invariant Prediction for Generalization in RL | Talk recording | Slides

## Blog

**Warning!** This blog is not optimized for mobile devices and contains many long equations that will be a pain to view on your phone. Unless you enjoy suffering, I strongly recommend reading it on a large screen.