Dr. Benjamin R. Cowley

Post-doc at Princeton University


new preprint:

"Bridging neuronal correlations and dimensionality reduction." Umakantha*, Morina*, Cowley*, Snyder, Smith, Yu. bioRxiv, 2020.
Link to paper: paper.pdf

newly published:

"High-contrast 'gaudy' images improve the training of deep neural network models of visual cortex." Cowley and Pillow. NeurIPS, 2020.
Link to paper: abstract.html

How do we train bigger models with less data?

Neuroscience is filled with models: models of behavior, models of neurons, and models of cognition. The bigger the model, the better we can capture the rich complexities of the brain. However, there's a catch: bigger models require more data to train, and recording time is limited. This suggests a fundamental limit on how big we can make our models.

I am a computational neuroscience post-doctoral researcher in Jonathan Pillow's Lab at Princeton University. I study active learning, a subfield of machine learning, with a special focus on neuroscientific models. The idea is to interleave data collection and model training: collect some data, train a model, choose the stimuli about which the model is most uncertain, and use the responses to those stimuli to further train the model. With active learning, we may get the best of both worlds: large, highly predictive models from a small amount of recording time.
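The loop described above can be sketched in a toy setting. This is a minimal illustration, not code from any of my projects: the linear "neuron," the bootstrap-ensemble uncertainty estimate, and all function names are assumptions made for the example. Each round, we refit a small ensemble of models on bootstrap resamples of the labeled data, and query the stimulus whose predicted response varies most across the ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear(X, y):
    # Least-squares fit of a linear encoding model (a stand-in for a
    # real neural encoding model).
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def active_learning_loop(stimuli, respond, n_init=5, n_rounds=10, n_boot=20):
    """Uncertainty sampling: repeatedly query the stimulus whose predicted
    response varies most across bootstrap-refit models.

    stimuli : (n, d) array of candidate stimuli
    respond : callable that returns the (noisy) neural response to a stimulus
    """
    n, _ = stimuli.shape
    labeled = list(rng.choice(n, size=n_init, replace=False))
    unlabeled = [i for i in range(n) if i not in labeled]
    responses = {i: respond(stimuli[i]) for i in labeled}

    for _ in range(n_rounds):
        X = stimuli[labeled]
        y = np.array([responses[i] for i in labeled])

        # Bootstrap ensemble: each member is refit on a resample of the
        # labeled data; disagreement among members estimates uncertainty.
        preds = []
        for _ in range(n_boot):
            idx = rng.choice(len(labeled), size=len(labeled), replace=True)
            preds.append(stimuli[unlabeled] @ fit_linear(X[idx], y[idx]))
        uncertainty = np.var(preds, axis=0)

        # Query the most uncertain stimulus and record its response.
        pick = unlabeled[int(np.argmax(uncertainty))]
        responses[pick] = respond(stimuli[pick])
        labeled.append(pick)
        unlabeled.remove(pick)

    return fit_linear(stimuli[labeled],
                      np.array([responses[i] for i in labeled]))
```

In a simulation where `respond` is a noisy linear neuron, the model recovered after `n_init + n_rounds` queries is typically close to the true weights, using far fewer trials than labeling every candidate stimulus.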

current projects:

- active learning to build better encoding models of visual cortical neurons
(in collaboration with the Smith Lab at CMU)

- mapping visual input to decision output of a male fruit fly during courtship
(in collaboration with Adam Calhoun and the Murthy Lab at Princeton)