Hello there! I'm a research scientist at DeepMind, where I work on reinforcement learning and representation learning. In my spare time, I blog about things that interest me.

Some Publications

I've put some recent papers below. A full list of my publications can be found here.

Learning Dynamics and Generalization in Reinforcement Learning. Clare Lyle, Mark Rowland, Will Dabney, Marta Kwiatkowska, Yarin Gal. ICML 2022, RLDM Spotlight. (arXiv link).

Revisiting the train loss: an efficient performance estimator for neural architecture search. Binxin Ru*, Clare Lyle*, Lisa Schut, Mark van der Wilk, and Yarin Gal. NeurIPS 2021. (arXiv link).

On The Effect of Auxiliary Tasks on Representation Dynamics. Clare Lyle*, Mark Rowland*, Georg Ostrovski, Will Dabney. AISTATS 2021. (arXiv link).


Talks

I've given a few talks over the years. Most of these weren't recorded, but the slides may still be useful as a narrative overview of some of the projects I've worked on.


Blog

Warning! This blog is not optimized for mobile devices and contains a lot of long equations that will be a pain to view on your phone. Unless you enjoy suffering, I strongly recommend reading it on a large screen.

Can you train a neural network forever? | Understanding trainability in neural networks

Deep dive into the edge of stability | Making loss landscapes trendy again.

Evaluating understanding in LLMs | It's surprisingly hard.

When do adaptive optimizers fail to generalize? | A case study

Do we know why deep learning generalizes yet? | It depends on your epistemology.

The Importance of Making Mistakes in RL | A tale of two NeurIPS papers

How to Win Scholarships and Influence Donors | All of my thoughts on scholarships in one place

Auxiliary tasks in RL | Maybe the real reward was the representation we learned along the way.

Power to the people, but how? | From muffins to marathons

A Bayesian Perspective on Training Speed and Model Selection | As presented at NeurIPS 2020

Why I'm Learning Biology | And why you should too

Causality, Generalization, and Reinforcement Learning | Oh my!

In praise of the logarithm | Why are logarithms so magical?

Causality | If causation isn't correlation, then what is it?

What makes the distributional perspective on RL different? | Stuff I did in 2018

Probabilistic Graphical Models | Why I shouldn't quit my PhD to go into the coffee business.

Reflections on my first month in grad school | DPhil life

A semi-coherent summary of set theory pt 2 | Or how set theorists are bad at naming things

A semi-coherent summary of set theory | Or why math really really needs rules

Week 1 in Oxford | I'm in Oxford!

School starts, as do the cool proofs | Infinity is still really cool.

What Work is Worth | Or, how getting grossed out by blood was possibly the most profitable personality trait I ended up with.

First Week at Microsoft | Or how I installed more Microsoft products in two days than I had in the last two years.

On Entering the Real World | Or, on getting a job at a big tech company.

A gentle introduction to infinities | Infinity is bigger than you think.

A round-up of my favourite proofs from introductory math courses | I spent a lot of time proving things this year.