Wednesday, November 11, 2015

A (selective) history of Neuroscience in 10 minutes!

I recently gave a talk in Oxford to the Computational Statistics & Machine Learning group (thanks again to Jeremy Heng for the invitation to speak). I was given a one-hour slot, which is longer than the usual 20-30 minutes that PhD students get to present their work. Rather than drawing out my results over a longer period (which I suspect would have been quite boring for the audience), I decided to make my introduction longer and talk more generally about the field(s) I work in - Neuroscience and inference in differential equation models. I really enjoyed preparing this material and think it is something that PhD students should be given the opportunity to do more often. Here is what I chose to talk about.

Neuroscience and Artificial Intelligence: 1940-1975, the 1990s & recent developments

This could easily be the subject of a book (perhaps in 3 volumes!). What I said was highly selective (not least because of the gaps in the timeline) and heavily biased by the things I've seen and found interesting.  Here is a summary of what I said.

Research in the years between 1940 and 1975 laid the foundations for modern Neuroscience and Artificial Intelligence, and many of the key ideas in both fields have their origins in this period.  It was also a time when there was a lot of crossover between experimental work and computational / mathematical work:

For example, in the 1940s McCulloch & Pitts developed what we now know as artificial neural networks.  For McCulloch and Pitts, this was a model of how the brain works - they saw neurons essentially as logic gates that were either on or off, and the brain as a network of these gates, each one switching on when a weighted sum of its inputs crosses a threshold.
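As a rough illustration of the idea (my own sketch in Python, not McCulloch & Pitts' original notation), such a unit just thresholds a weighted sum of binary inputs, and with suitable weights and thresholds it reproduces the basic logic gates:

def mcculloch_pitts_unit(inputs, weights, threshold):
    # Fire (output 1) if the weighted sum of the binary inputs reaches the threshold.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With unit weights and a threshold of 2 the neuron behaves like an AND gate;
# dropping the threshold to 1 turns it into an OR gate.
print(mcculloch_pitts_unit([1, 1], [1, 1], threshold=2))  # prints 1
print(mcculloch_pitts_unit([1, 0], [1, 1], threshold=2))  # prints 0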

In the 1950s, Hodgkin & Huxley developed a more realistic single-neuron model that linked ionic currents to the voltage inside the cell (what we now know as electrophysiology), for which they won the Nobel Prize in 1963.  In my opinion it remains one of the finest achievements in computational science, both because of its combination of a nonlinear differential equation model with experimental results, and because it has fundamentally changed the way people think about how neurons process information.  For me, it makes things more mysterious in a way - how does all this electrophysiology end up generating a conscious self that is able to make (reasonably) effective decisions!?  I think that is something that scientists don't have a very good answer to, yet.
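To give a flavour of what the model looks like (this is just a minimal sketch using the standard textbook parameter values for the squid giant axon, integrated with a crude forward-Euler scheme rather than anything Hodgkin & Huxley actually used), the membrane voltage V is coupled to three "gating" variables m, h and n through a nonlinear system of ODEs:

import numpy as np

# Standard textbook Hodgkin-Huxley parameters for the squid giant axon.
C = 1.0                                # membrane capacitance (uF/cm^2)
g_Na, g_K, g_L = 120.0, 36.0, 0.3      # maximal conductances (mS/cm^2)
E_Na, E_K, E_L = 50.0, -77.0, -54.4    # reversal potentials (mV)

# Voltage-dependent opening/closing rates for the gating variables m, h, n.
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

def simulate(I_ext=10.0, T=50.0, dt=0.01):
    # Crude Euler integration with a constant injected current I_ext (uA/cm^2).
    V, m, h, n = -65.0, 0.05, 0.6, 0.32   # typical resting values
    trace = []
    for _ in range(int(T / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K = g_K * n**4 * (V - E_K)
        I_L = g_L * (V - E_L)
        V += dt * (I_ext - I_Na - I_K - I_L) / C
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        trace.append(V)
    return trace

print(max(simulate()))  # the peak voltage is well above 0 mV - an action potential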

Leaving aside those potentially unanswerable philosophical questions, there were also a whole host of practical questions that arose as a result of Hodgkin & Huxley's work.  One question that is quite important for the work that I do is how to relate electrophysiology to the macroscopic observations that are obtained from EEG and MRI recordings.  This is challenging because the brain contains around 10^10 neurons, and the Hodgkin-Huxley equations only describe the activity of a single neuron.  Coupling together 10^10 neurons in a single model is (currently) computationally intractable, so in the early 1970's Wilson & Cowan developed what is called a mean-field model, which lumps together populations of neurons with similar properties (e.g. excitatory / inhibitory) and describes how the mean activity of those populations evolves over time.  These models are useful because they still give some insight into electrophysiology, but they can also be related to observations.
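As a rough sketch of what a mean-field model looks like (a simplified form of the Wilson-Cowan equations, with illustrative parameter values rather than anything fitted to data), the mean activities E and I of an excitatory and an inhibitory population evolve as a pair of coupled ODEs:

import numpy as np

def sigmoid(x):
    # Population response function: mean firing rate as a function of total input.
    return 1.0 / (1.0 + np.exp(-x))

# Coupling strengths, thresholds and time constants - illustrative values only.
w_EE, w_EI, w_IE, w_II = 16.0, 12.0, 15.0, 3.0
theta_E, theta_I = 4.0, 3.7
tau_E, tau_I = 1.0, 2.0
P = 1.5   # external input to the excitatory population

def simulate(T=100.0, dt=0.01):
    E, I = 0.1, 0.1   # mean activity of each population
    trace = []
    for _ in range(int(T / dt)):
        dE = (-E + sigmoid(w_EE * E - w_EI * I + P - theta_E)) / tau_E
        dI = (-I + sigmoid(w_IE * E - w_II * I - theta_I)) / tau_I
        E += dt * dE
        I += dt * dI
        trace.append((E, I))
    return trace

print(simulate()[-1])  # final (E, I) activities; the dynamics depend strongly on the coupling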

In the 1990's research became more specialised into sub-fields of neuroscience and artificial intelligence:

Here I am using Statistical Neuroscience mainly to refer to the community of people who do statistical analysis of brain imaging data.  And with Artificial Intelligence I am mainly thinking of the development of artificial neural networks to solve things like classification problems.

Recently, more overlap has developed between the different fields (and there is also a lot more total research activity):

My PhD work is mainly in the intersection of Computational Neuroscience and Statistical Neuroscience.  There is also interesting work going on in the intersection of Computational Neuroscience and Machine Learning (e.g. Demis Hassabis and Google DeepMind).  At UCL, the Gatsby Computational Neuroscience Unit recently co-located with the new Sainsbury Wellcome Centre for Neural Circuits, which is led by Nobel Prize winner John O'Keefe.  It will be exciting to see what happens in the intersection of the 3 circles in the future!

One slightly depressing development from the viewpoint of statisticians is that state-of-the-art neuroscience models tend to be quite computationally expensive, and hence very challenging to do inference for.  I think statisticians should be more vocal in arguing that it is better to have a simpler model whose parameters you can estimate than a more complex model whose parameters you can't.