About Me

I recently completed my PhD at Université de Montréal, supervised by Marc Schönwiesner, formerly of the International Laboratory for Brain, Music and Sound Research (BRAMS) and now at the Institute for Biology at Leipzig University, and by Yoshua Bengio of the Quebec Institute for Learning Algorithms (Mila). Beginning in Spring 2021, I’ll be a Postdoctoral Researcher in the Human Information Processing Lab at Oxford University.

I am also on the Senior Advisory Committee of Women in Machine Learning Inc., an organization that runs events and programs to support machine learning researchers and practitioners who identify primarily as women. In my free time, I like to read, knit, garden, hike, and listen to electronic music.

Research Statement (last updated 2018)

My current research interests lie at the intersection of neuroscience, deep learning and philosophy of explanation. I am interested in how neuroscience-inspired analysis methods can help to explain and describe the function of deep learning systems and how machine learning theory may impact our understanding of biologically intelligent systems. I’m particularly interested in how philosophy of mind and philosophy of explanation may help to formulate challenging problems related to the explainability of biological and artificial intelligence (AI).

My research trajectory has been winding but not random. Each of my previous research topics has gradually helped me narrow in on the specific intersection of artificial intelligence and neuroscience where I want to place my efforts. For this reason, my previous research topics should not be taken as indicative of my current research interests. Rather, in this text, I hope to communicate the path that led me to my current focus on explanation in biological and artificial intelligence.

My earliest contributions were in music information retrieval and music perception and cognition. I wanted to understand how the sounds of music could elicit such extreme experiences in listeners (chills, joy, tears) and how some aspects of musical experience could be extracted from various representations of music (audio waveforms, musical scores). During my master’s degree, I fused these interests by exploring the use of machine learning and deep learning-based analysis methods to extract various forms of musical information (timbre, pitch, rhythm, genre, etc.) from neuroimaging measurements collected while subjects listened to music.

I started a PhD in computational cognitive neuroscience with the goal of studying the computations underlying auditory perception more broadly. I wanted to characterize the information processing used by the brain to transform sound waveforms into high-level auditory experiences like speech and music. I was inspired by the work of researchers like Jim DiCarlo in the visual domain, who used deep neural networks as a model of the visual system. I went to work with Elia Formisano in Maastricht, the Netherlands, who is a leader in auditory encoding analysis applied to ultra-high field functional magnetic resonance imaging (fMRI). In this approach, candidate representational models (often expressed as a feature or transform of the sound waveform) are compared on their ability to predict brain activity in auditory brain regions using regularized regression. It seemed natural to me to replace these candidate representational models with candidate representation learning systems. In this way, hypotheses become less about specific representations and more about the type of architecture and training procedure that could elicit representations similar to those observed in the fMRI activity.
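As a toy sketch of the encoding approach described above (everything here is simulated for illustration; none of these features or responses come from an actual study), each candidate model's features are mapped to every voxel's responses with regularized regression, and models are compared on held-out predictive accuracy:

```python
import numpy as np
from sklearn.linear_model import RidgeCV

# Simulated data: 200 sounds, two candidate feature models, 50 voxels.
rng = np.random.RandomState(0)
n_sounds, n_voxels = 200, 50
features_a = rng.randn(n_sounds, 30)  # e.g. a spectrotemporal model
features_b = rng.randn(n_sounds, 30)  # e.g. a learned representation
# Simulated brain activity that in fact depends on model A's features.
voxels = features_a @ rng.randn(30, n_voxels) + 0.5 * rng.randn(n_sounds, n_voxels)

def encoding_score(features, voxels, n_train=150):
    """Fit ridge regression from features to each voxel on a training split
    and return the mean predictive R^2 on held-out sounds."""
    model = RidgeCV(alphas=np.logspace(-2, 4, 7))
    scores = [model.fit(features[:n_train], voxels[:n_train, v])
                   .score(features[n_train:], voxels[n_train:, v])
              for v in range(voxels.shape[1])]
    return float(np.mean(scores))

score_a = encoding_score(features_a, voxels)  # generative model: predicts well
score_b = encoding_score(features_b, voxels)  # unrelated model: predicts poorly
```

The comparison of `score_a` and `score_b` is the model-selection step: the representational hypothesis that better predicts held-out activity is favoured.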

My interests shifted over the course of my PhD as I became more interested in core deep learning research, divorced from neuroscience applications. I was particularly inspired by efforts to interpret or explain the function of trained networks, as well as by research concerned with learning disentangled representations. I received a scholarship to work on a project with Nuance Communications, a company specializing in automatic speech recognition (ASR) and natural language processing, to characterize the intermediate layers of convnet-based acoustic models in ASR systems. Specifically, I characterized the language-specificity of each layer using a network “surgery” procedure in a transfer learning setting. Part of this work will be presented at the NeurIPS 2018 workshop on Interpretability and Robustness in Audio, Speech and Language.
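The "surgery" idea can be illustrated with a deliberately simplified sketch (two small classification tasks stand in for two languages; this is not the actual ASR setup): transplant a layer from a network trained on task A into a network trained on task B, and read the layer's task-specificity off the resulting accuracy drop.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Two related tasks standing in for two languages (simulated data).
Xa, ya = make_classification(n_samples=500, n_features=20, random_state=0)
Xb, yb = make_classification(n_samples=500, n_features=20, random_state=1)

def train(X, y):
    net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
    net.fit(X, y)
    return net

net_a, net_b = train(Xa, ya), train(Xb, yb)
baseline = net_b.score(Xb, yb)

# "Surgery": transplant net_a's first layer weights into net_b, then see how
# much task-B accuracy survives. The smaller the drop, the less task-specific
# (here: language-specific) that layer is.
net_b.coefs_[0] = net_a.coefs_[0].copy()
net_b.intercepts_[0] = net_a.intercepts_[0].copy()
swapped = net_b.score(Xb, yb)
```

Repeating the swap layer by layer gives a profile of specificity across depth, which is the shape of result the project was after.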

I adapted the remaining fMRI project of my PhD such that the networks that I would characterize for Nuance would be the same networks whose learned representations I would compare to fMRI responses to speech sounds throughout the auditory pathway. This gave me a playground of opportunities for my remaining PhD research as I now had measurements from artificial and human neural networks listening to the same sounds. My goal was to go beyond previous research showing that deep networks learn representations that are similar to neural representations by incorporating additional knowledge about both the artificial and biological systems to yield deeper insight from their comparison.

In trying to come up with new ways of comparing deep networks to brains, and in thinking about the overlap of deep learning and neuroscience more generally, I came up against fundamental questions about the nature and progression of this scientific enterprise to which I was unable to find satisfactory answers. What would it mean to understand some intelligent capacity (like speech recognition), and how could I situate my approach in the context of this larger scientific goal? This led me to read texts in philosophy of science, particularly philosophy of statistics, philosophy of explanation and philosophy of mind, to get a better sense of what our termination criterion for explaining some intelligent capacity might look like. This changed my view of large swaths of neuroscience research. I found that existing theories of explanation in neuroscience were not up to the task of accounting for explanation in AI, and therefore not easily applicable to the current practice of comparing artificial and biological networks.

At the same time, I continued to be fascinated by the methods used to study deep networks, especially those that attempted to shed light on core questions in deep learning theory. Many of these methods involve procedures with analogs in experimental neuroscience (e.g. ablations), and I found that these experiments began to shape the way I thought about brains. I realized that what I am most interested in is not ‘neuroscience-inspired deep learning’ or ‘deep learning-inspired neuroscience’ but deep learning research that is neuroscience research. For example, one of my favourite papers from 2018, published at the International Conference on Learning Representations, is perhaps just as relevant to neuroscience as it is to machine learning. “On the importance of single directions for generalization” presents experiments concerning the role of single directions in activation space for generalization, but also questions the often-assumed relationship between a neuron’s selectivity and its functional role. In my poster at the Cognitive Computational Neuroscience conference, I proposed that, to the extent that an artificial system and a biological system demonstrate the same phenomenon, their explanations of that phenomenon should share the same form. One implication of this statement is that the more AI mimics biological systems, the more the scientific questions we ask of them become one and the same.
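The kind of ablation experiment gestured at above can be sketched in a few lines (a simulated illustration, not the paper's actual setup): measure each unit's class selectivity, then zero out one direction at a time and record the effect on a downstream readout.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated 20-unit layer for 400 two-class stimuli; a linear readout
# stands in for downstream processing.
rng = np.random.RandomState(0)
n, d = 400, 20
labels = rng.randint(0, 2, n)
acts = rng.randn(n, d)
acts[:, 0] += 2.0 * labels             # unit 0 is highly class-selective
acts[:, 1:] += 0.3 * labels[:, None]   # weak class information elsewhere

readout = LogisticRegression(max_iter=1000).fit(acts, labels)
baseline = readout.score(acts, labels)

# Selectivity: how strongly each unit's mean activation differs by class.
selectivity = np.abs(acts[labels == 1].mean(0) - acts[labels == 0].mean(0))

# Importance: accuracy drop when that single direction is ablated (zeroed).
drops = []
for unit in range(d):
    ablated = acts.copy()
    ablated[:, unit] = 0.0
    drops.append(baseline - readout.score(ablated, labels))
```

Comparing `selectivity` against `drops` across units is precisely the kind of question that applies equally to an artificial layer and to a recorded neural population.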

While my research trajectory has been varied, it has not been haphazard. I have always been, and continue to be, concerned with meta-scientific questions about how best to use statistical analysis and machine learning to study intelligence. I have gradually progressed from applied questions to more theoretical and abstract ones. There has been a consistent logic to the choices that led me from music perception to philosophy of explanation. My views on science have matured, and I now appreciate the difficulty of doing good science in a way that I did not when I was exploring during my master’s degree. I am now more concerned with doing work that is worthwhile and that can be justified with reference to a rigorous philosophical theory of scientific progress. I also appreciate the need for a diversity of scientific approaches and the need to resist the individualization of scientific research. Progress on these difficult questions will only be made in collaboration.

Looking forward, I see a niche that I am well-positioned to contribute to. There is a conceptual gap that plagues efforts to explain the function of neural networks. We need a new philosophical theory of explanation that applies equally to artificial and biological intelligence. With such a framework, we can develop new ways to study artificial systems and formalize how they might help us better understand biologically intelligent systems. I am not currently interested in working with biological neural data because I feel that there is so much theoretical and in silico work to do before I would be able to decide what experiment to run. In practice, I imagine exploring new methods to analyze and probe artificial systems, motivated by fundamental questions in deep learning theory and neuroscience, while simultaneously developing a philosophical framework to account for good explanations of intelligence. Long term, this may generate hypotheses about biologically intelligent systems that can be tested through targeted experiments. I think this is how I can best use my diverse skills to serve the larger scientific goal of discovering the general principles that underlie biological and artificial intelligence.