Every blog post ever

Can you train a neural network forever?

Understanding trainability in neural networks

Deep dive into the edge of stability

Making loss landscapes trendy again.

Evaluating understanding in LLMs

It's surprisingly hard.

When do adaptive optimizers fail to generalize?

A case study

Do we know why deep learning generalizes yet?

It depends on your epistemology.

The Importance of Making Mistakes in RL

A tale of two NeurIPS papers

How to Win Scholarships and Influence Donors

All of my thoughts on scholarships in one place

Auxiliary tasks in RL

Maybe the real reward was the representation we learned along the way.

Power to the people, but how?

From muffins to marathons

A Bayesian Perspective on Training Speed and Model Selection

As presented at NeurIPS 2020

Why I'm Learning Biology

And why you should too

Causality, Generalization, and Reinforcement Learning

Oh my!

In praise of the logarithm

Why are logarithms so magical?

If causation isn't correlation, then what is it?

What makes the distributional perspective on RL different?

Stuff I did in 2018

Probabilistic Graphical Models

Why I shouldn't quit my PhD to go into the coffee business.

Reflections on my first month in grad school

DPhil life

A semi-coherent summary of set theory pt 2

Or how set theorists are bad at naming things

A semi-coherent summary of set theory

Or why math really really needs rules

Week 1 in Oxford

I'm in Oxford!

School starts, as do the cool proofs

Infinity is still really cool.

What Work is Worth

Or, how getting grossed out by blood was possibly the most profitable personality trait I ended up with.

First Week at Microsoft

Or how I installed more Microsoft products in two days than I had in the last two years.

On Entering the Real World

Or, on getting a job at a big tech company.

A gentle introduction to infinities

Infinity is bigger than you think.

A round-up of my favourite proofs from introductory math courses

I spent a lot of time proving things this year.