Hello there! I'm a research scientist at DeepMind, where I work on reinforcement learning and representation learning.
In my spare time, I blog about things that interest me.

Recent Publications

I've put some recent papers below. A full list of my publications can be found here.

Revisiting the train loss: an efficient performance estimator for neural architecture search. Binxin Ru*, Clare Lyle*, Lisa Schut, Mark van der Wilk, and Yarin Gal. NeurIPS 2021. (arXiv link).

Provable Guarantees on the Robustness of Decision Rules to Causal Interventions. Benjie Wang*, Clare Lyle*, and Marta Kwiatkowska. IJCAI 2021. (arXiv link).

On the Effect of Auxiliary Tasks on Representation Dynamics. Clare Lyle*, Mark Rowland*, Georg Ostrovski, and Will Dabney. AISTATS 2021. (arXiv link).

Talks

I've given a few talks during my PhD. Most of these weren't recorded, but the slides may still offer a useful narrative overview of some of the projects I've worked on.

Simons Institute Workshop on Theory of Deep RL, October 2020. Invariant Prediction for Generalization in RL | Talk recording | Slides

GenU Workshop, October 2021. Bayesian Model Selection and Generalization in Deep Learning | Slides

G-Research ML College, November 2021. The Many Faces of Model Selection | Slides

OxCSML Group, December 2021. Representation Dynamics and Feature Collapse in RL | Slides

Blog

Warning! This blog is not optimized for mobile devices and contains a lot of long equations that are a pain to view on a phone. Unless you enjoy suffering, I strongly recommend reading it on a large screen.