Part I: Integration of deep learning and neuroscience

Much lip service is paid to the “integration of deep learning and neuroscience”, with the goal of creating a feedback loop: deep learning for neuroscience and neuroscience for deep learning. This turns out to be hard to do in practice. The thorough and well-referenced paper Towards an Integration of Deep Learning and Neuroscience primarily proposes experiments that are either not possible given current technological limitations or whose potential impact on our understanding is unclear. In fact, my own PhD research may fall into the latter category. Of course I can motivate the work with vague notions of wanting to describe neural computations in terms of artificial neural networks and of fostering a feedback loop between neuroscience and deep learning, but the more specific Why remains elusive. It’s not clear to me what we will actually learn, or how it might fit into a larger scientific enterprise that will hopefully lead to an improved understanding of intelligent systems. Realizing that I was unable to situate and justify my own research is what led me to read more philosophy of neuroscience, computation, and explanation, and ultimately to write this blog post.

I think the confusion when trying to work at this intersection comes in large part from a lack of agreement about what progress towards a common goal would look like. This topic came up at the inaugural Cognitive Computational Neuroscience (CCN) conference last year, which was assembled to unify the “disconnected communities of cognitive science, artificial intelligence, and neuroscience” towards the common goal of “understanding the computational principles that underlie complex behavior”. Jim DiCarlo, chairing a panel discussion, asked, “when people say they want to work together, usually there is some idea of a shared goal…some idea of what success would even look like…are we even after the same thing?” This question received a number of very different answers from the panel, demonstrating how hard it is even to agree on a common goal. Panelist Yann LeCun stated that the common goal is to “explain intelligence”, but this doesn’t answer the question, because we disagree about what it takes to explain intelligence. LeCun wants to replicate animal intelligence in artificial systems. For neuroscientist Jackie Gottlieb, on the other hand, “success means characterizing a system at a particular level of abstraction … in a way that is reproducible and solid.” She seemed to view the relationship between neuroscience and machine learning as an exchange of pieces of evidence rather than as work towards a common goal. Cognitive scientist Josh Tenenbaum stressed the importance of distinguishing between goals on different time scales and suggested that all the CCN attendees probably share some long-term vision of success, even if they disagree about how to work towards that goal in the short term. I think that coming up with good answers to these questions is the most important hurdle to overcome right now. An integration of cognitive science, artificial intelligence, and neuroscience will not be possible until we can motivate our research by reference to a shared definition of what it means to make progress towards the goal of understanding intelligence.

[Image: CCN panel discussion]

The same debate is happening in machine learning right now. The quest for “interpretable” AI is ultimately asking which explanations of AI systems we will accept. Are some systems more explainable than others? For example, are systems designed specifically to expose ‘disentangled’ representations more interpretable? Several events at the Neural Information Processing Systems conference in 2017 were dedicated to related topics (the Interpretable ML Symposium and the Learning Disentangled Representations: from Perception to Control workshop). I think it is no coincidence that machine learning and neuroscience are having these conversations at the same time. Rather, it is precisely because artificial systems are looking more and more like biological ones, and our models of biological intelligence are looking increasingly like AI, that we are forced to question our standard conceptions of what makes a good explanation.
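To make the question a bit more concrete, one common reading of “designed to expose disentangled representations” is the β-VAE family of models (my example, not one singled out by those events), which reweights the KL term of the standard variational autoencoder objective so that individual latent dimensions tend to capture individual factors of variation. A minimal sketch of that objective, to be maximized:

$$
\mathcal{L}_{\beta\text{-VAE}}(x) \;=\; \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] \;-\; \beta \, D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right), \qquad \beta > 1 .
$$

Whether a model trained this way is genuinely more explainable, or merely easier to probe, is exactly the sort of question a theory of explanation would need to settle.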

These questions ultimately belong to epistemology and the philosophy of science, yet philosophical theories are rarely invoked in these discussions. I’ve realized that philosophers actually have a lot to say about computational explanation in neuroscience and cognitive science. As scientists, rather than reinventing the wheel, we would do well to look to our philosopher colleagues to help us wade through these difficult but crucially important questions about what constitutes an explanation. At the very least, I think our discussions would be simpler if we borrowed the established language of the philosophy of explanation. But I will make a stronger claim: what is needed is a new theory of explanation that applies equally to biological and artificial intelligence. The way we understand AI systems (the methods we use to study them, the nature of the explanations we accept) is very different from the way we traditionally understand and study biological systems. At present, this constitutes a challenge to the CCN goal, but in the long term I see it as an opportunity to define a new science of intelligence that includes both artificial and biological intelligence. My central claim is that to achieve an integration of deep learning and neuroscience, we must reconcile their different theories of explanation. What I want to work towards is not deep learning applied to neuroscience or neuroscience applied to deep learning, but deep learning research that is neuroscience research.

In the posts that follow, I will review dominant theories of explanation in neuroscience, cognitive science and AI. As part of this, I will revisit what neuroscientists and AI researchers have written about these questions and try to situate their views within the existing philosophical frameworks of explanation.