The Miracle of Life
When I was a wide-eyed undergrad, I took my first (and only – sorry Prof Langer) systems course in my computer science curriculum. In that course, we built an ALU (Arithmetic Logic Unit – the part of your computer that does math) out of Boolean gates. We then gradually built up the complexity and levels of abstraction we were considering, going from logic gates up to assembly programming up to file systems, until I felt like I kind of understood how the high-level instructions I was typing into my interpreter were translated into the passage of electrical current through transistors in my CPU.
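(To give a flavour of what ‘building an ALU out of Boolean gates’ actually involves, here’s a toy Python sketch – not from the course, and the helper names are my own – of a 1-bit full adder wired together from AND/OR/XOR gates and chained into a 4-bit adder, which is the arithmetic heart of an ALU.)

```python
# Toy illustration: a 1-bit full adder built only from Boolean gates,
# then chained into a 4-bit ripple-carry adder. Not from the course --
# just a sketch of the kind of thing you build on the way up to an ALU.

def AND(a, b): return a & b
def OR(a, b): return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    """Add two bits plus a carry, returning (sum_bit, carry_out)."""
    s1 = XOR(a, b)
    sum_bit = XOR(s1, carry_in)
    carry_out = OR(AND(a, b), AND(s1, carry_in))
    return sum_bit, carry_out

def add4(a_bits, b_bits):
    """Add two 4-bit numbers given as lists of bits, least-significant first."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

print(add4([1, 0, 1, 0], [1, 1, 0, 0]))  # 5 + 3 = 8 -> ([0, 0, 0, 1], 0)
```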
The wonderful thing about computer programming is that, at the fundamental level, we understand how the instructions we type translate into behaviour. When things don’t work the way you expect, it’s always theoretically possible to figure out why. We call the field ‘computer science’, but the discipline is a world away from the messy experimental sciences like biology and chemistry, where behaviour is often infuriatingly unpredictable.
Partly as a character-building exercise, and partly out of curiosity, I’ve decided to spend the next few months digging into an actual scientific field. My goal is to develop a logic-gate-to-GUI type of understanding of how the chemical reactions in cells lead to system-level phenomena in humans. Part of my motivation is the hope that, by understanding what the gaping holes and open problems in this area are, I’ll be able to steer my currently-nascent research agenda towards the kinds of computer science problems whose solutions would give biologists the tools to fill those holes.
This will be the first in a series of CliffsNotes-style posts on various aspects of biology. My plan is to start with the basics (cells, genes, signalling, and why we don’t live to be 5000 years old like some trees do), and then move on to summarizing some papers that have applied machine learning to chemical reaction networks and drug discovery.
Prelude: The Atom
I’m in the midst of reading The Making of the Atomic Bomb, a lighthearted romp depicting the final years in which humanity didn’t have the capacity to cause a mass extinction event at the press of a button. Crucial to both building atomic bombs and understanding the basic mechanisms of life is understanding the atom. The first thing you should know about the atom is that, for such a tiny object, it’s shockingly confusing. And as Richard Rhodes describes, it befuddled several generations of scientists who naively tried to understand it before the scientific community finally accepted that sometimes weird things are just weird and you can’t explain the weirdness away.
We learn in school that the atom has a small, dense nucleus consisting of protons and neutrons, with electrons spinning around it. This model is what most people picture when they think of atoms: one small ball surrounded by rings of electron orbits. If your chemistry teacher is any good, you also learn that, although photogenic, this model isn’t entirely accurate, and that electrons are better thought of as inhabiting ‘shells’ rather than tracing neat orbits. Except the electrons can’t really be thought of as being in any one place at a given time. And the shells have really weird, specific shapes.
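(For the curious: the ‘weird, specific shapes’ are the s, p, d, f orbitals from chemistry class, and the bookkeeping behind them is simple enough to write down. A quick sketch – standard chemistry-class facts, my own code – of how the familiar 2, 8, 18, 32 shell capacities fall out:)

```python
# Bookkeeping for electron shells: shell n has subshells l = 0 .. n-1,
# each subshell has (2l + 1) orbitals, and each orbital holds 2 electrons.
# Total capacity works out to 2 * n^2 per shell.

SUBSHELL_NAMES = "spdf"  # traditional labels for l = 0, 1, 2, 3

for n in range(1, 5):
    subshells = [f"{n}{SUBSHELL_NAMES[l]}({2 * (2 * l + 1)})" for l in range(n)]
    capacity = sum(2 * (2 * l + 1) for l in range(n))
    print(f"shell n={n}: {' '.join(subshells)} -> holds {capacity} electrons")

# shell n=1: 1s(2)                     -> holds 2 electrons
# shell n=2: 2s(2) 2p(6)               -> holds 8 electrons
# shell n=3: 3s(2) 3p(6) 3d(10)        -> holds 18 electrons
# shell n=4: 4s(2) 4p(6) 4d(10) 4f(14) -> holds 32 electrons
```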
So basically, in the late 19th century scientists figured out that atoms have positive bits and negative bits, and that sometimes the negative bits, which are much smaller than the positive bits, can get transferred from one atom to another. J. J. Thomson suggested the ‘plum pudding model’ to describe this, where the electrons were basically floating in an atom soup. Ernest Rutherford, a hotshot New Zealander, started working with Thomson at Cambridge, holed up in Montreal for a while (three cheers for McGill), and then, back in England, spent a few years shooting helium ions at gold foil. Because the positively charged helium ions were mostly able to pass right through the gold but occasionally bounced directly back at the emitter, he realized that the positively charged part of the atom had to be super dense and somewhere in the middle, with the negative bits chilling on the outside. This didn’t really jibe with other things people knew about positive and negative charges, since a classically orbiting electron should have radiated away its energy and spiralled into the center of the atom, but the evidence was pretty clear. This was also back in the day when you could do cutting-edge physics with a bit of gold foil and some radium, so people could see for themselves that you really could shoot positive particles straight through solid gold.
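(As a rough back-of-the-envelope check – my numbers, not Rhodes’s – you can estimate how small that dense positive core has to be from the energy of the alpha particles alone:)

```python
# Back-of-the-envelope: how close can a ~5 MeV alpha particle get to a gold
# nucleus before Coulomb repulsion turns it around? (Distance of closest
# approach for a head-on collision -- rough, illustrative numbers only.)

COULOMB_CONST_MEV_FM = 1.44   # e^2 / (4*pi*eps0) in MeV * femtometres
Z_ALPHA = 2                   # charge of the helium ion (alpha particle)
Z_GOLD = 79                   # charge of the gold nucleus
E_ALPHA_MEV = 5.0             # typical alpha energy from a radium-like source

# All kinetic energy converts to electrostatic potential energy at the
# turning point: E = Z1 * Z2 * k / d  =>  d = Z1 * Z2 * k / E
d_fm = Z_ALPHA * Z_GOLD * COULOMB_CONST_MEV_FM / E_ALPHA_MEV
print(f"closest approach: ~{d_fm:.0f} fm (~{d_fm * 1e-15:.1e} m)")

# A gold atom is roughly 1e-10 m across, so whatever the alpha bounces off
# is at least a few thousand times smaller than the atom itself.
print(f"atom-to-nucleus size ratio: > {1e-10 / (d_fm * 1e-15):.0f}x")
```

That ~46 femtometre figure is already thousands of times smaller than the atom, and the actual nucleus is smaller still.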
Rutherford was still stumped on how to make his nuclear model agree with classical physics when the up-and-coming Danish physicist Niels Bohr swung by his lab in England in the early 20th century. After some thinking, Bohr (with quite a bit of nurturing from Rutherford) proposed that the behaviour physicists were observing in their experiments was consistent with an atom where electrons live in a discrete set of permissible ‘shells’, and move up and down the shell hierarchy as they gain and lose energy. This model doesn’t exactly explain why electrons inhabit these shells (in reality they don’t – the shells in Bohr’s model correspond to where most of the probability mass of the electron’s wave function sits at a given energy level, for the same reason that high-dimensional Gaussians place most of their mass in a thin shell at some distance from the origin), but it has enough predictive power that chemists still use it to describe chemical reactions.
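(To see that predictive power in action, here’s the standard Bohr-model calculation – textbook physics, nothing specific to Rhodes’s account – of the colour of light hydrogen emits when an electron drops from the third shell to the second:)

```python
# Bohr model in action: energy levels of hydrogen and the colour of light
# emitted when an electron drops between shells.

RYDBERG_EV = 13.6    # hydrogen ground-state binding energy, in eV
HC_EV_NM = 1240.0    # Planck's constant * speed of light, in eV * nm

def energy_level(n):
    """Energy of the n-th Bohr shell of hydrogen, in eV (negative = bound)."""
    return -RYDBERG_EV / n**2

def emission_wavelength_nm(n_high, n_low):
    """Wavelength of the photon emitted when the electron drops n_high -> n_low."""
    photon_energy = energy_level(n_high) - energy_level(n_low)
    return HC_EV_NM / photon_energy

# The 3 -> 2 transition predicts the famous red hydrogen-alpha line at ~656 nm.
print(f"n=3 -> n=2: {emission_wavelength_nm(3, 2):.0f} nm (observed: ~656 nm)")
```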
Alright, with that basic understanding of how atoms work, we’re ready to start learning chemistry and biology. Stay tuned as we begin our biological journey.