Tomorrow I will be presenting a poster at the Cognitive Computational Neuroscience (CCN) conference entitled “Towards a theory of explanation for biological and artificial intelligence”. Here is the poster along with the short paper that goes with it, as well as some slides that I used in a previous presentation.
All of these documents are incomplete, but the poster gives a reasonable outline of the main ideas I’d like to think through collectively with the community. This work grew out of trying to situate my own research within a broader scientific enterprise, which proved more challenging than I expected. The more I read about philosophical theories of explanation, cognition, and computation, the more aware I become that many of my intuitions about how this kind of science progresses are hard to back up formally. I recognize that neuroscience is messy and exploratory, but ultimately I want to be doing science motivated by reference to a satisfactory theory of explanation (and of scientific progress). None of the theories of explanation I’ve reviewed can account for explanations of intelligent phenomena in artificial and biological systems, which is an obstacle to defining scientific progress in this domain. Without a roadmap, our science is just hunch-following.
Other people have also been interested in these ideas recently. For example, the Unsupervised Thinking podcast is running a multi-part series about explanation.
If you’re at CCN, please come chat at my poster (no. 21) during the first poster session tomorrow.