Learning and inference
A central problem in the practice of science, and in the day-to-day functioning of biological organisms, is deciding among competing explanations of data containing random errors. We have worked on well-founded methods for trading off the complexity and accuracy of explanations within the frameworks of Bayesian statistics and the Minimum Description Length principle. We are currently interested in applying ideas from statistical learning to integrate our understanding of learning and adaptation across levels of biological organization. For example, we are developing theory to understand how the tetracycline adaptation network in E. coli and the perceptual learning and decision-making pathways of cortex should react to changing environments. We are especially interested in three questions:
- How is the optimal representation of the environment in a sensory network affected by the goals of a secondary decision layer that integrates sensory evidence?
- In Bayesian learning, how do prior expectations interact with accumulating data to determine the optimal learning trajectory for a network or decision element that describes features of a statistical environment?
- In Bayesian learning, when should learning from examples be sudden (switching behavior) rather than gradual?
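The second and third questions can be illustrated with a toy conjugate-Bayes sketch (not the models studied here; the Beta prior, the two candidate hypotheses, and the coin-flip data are all illustrative assumptions). Estimating a continuous parameter under a conjugate prior yields a gradual learning trajectory, with the prior and the accumulating counts blended at every step; deciding between two discrete hypotheses instead accumulates log-odds, so the posterior follows a sigmoid and can flip abruptly from one hypothesis to the other.

```python
import math

def beta_posterior_mean(a0, b0, heads, tails):
    # Conjugate Beta-Bernoulli update: prior Beta(a0, b0) plus observed counts.
    return (a0 + heads) / (a0 + b0 + heads + tails)

# Hypothetical observations from a biased source, p(1) around 0.75.
data = [1, 1, 0, 1, 1, 1, 0, 1]

# Gradual learning: the posterior mean moves smoothly from the prior
# expectation (0.5 here) toward the empirical frequency as data accumulate.
a0, b0 = 5.0, 5.0
heads = tails = 0
trajectory = []
for x in data:
    heads += x
    tails += 1 - x
    trajectory.append(beta_posterior_mean(a0, b0, heads, tails))

# Switching behavior: posterior over two discrete hypotheses,
# H0: p = 0.5 versus H1: p = 0.75. Each observation adds a fixed
# log-likelihood-ratio increment, so the posterior probability of H1
# is a sigmoid of accumulated evidence -- it hugs 0 or 1 and crosses
# over abruptly once the evidence overcomes the prior log-odds.
log_odds = math.log(0.05 / 0.95)  # strong prior belief in H0
posterior_h1 = []
for x in data:
    log_odds += math.log((0.75 if x else 0.25) / 0.5)
    posterior_h1.append(1.0 / (1.0 + math.exp(-log_odds)))
```

With this short, hypothetical data set the strong prior on H0 still dominates, so the posterior on H1 rises but has not yet switched; lengthening the data stream drives the accumulated log-odds past zero and produces the sudden transition.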