
Learning and inference

Figure: (a) Model B is very close to the truth (t) at one point but is mostly far away. Model A is more robust: it is close to the truth for many parameter choices. With limited data A is preferred over B, but with more data B is favored. (b) A single model family comes close to the truth in two regions of the parameter space. With few data points the more robust region of model space dominates the Bayesian posterior; as the amount of data increases, the more accurate region dominates. This is analogous to the "phase transitions" of statistical physics.
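The crossover described in the caption can be sketched numerically. In the usual asymptotic picture, the expected log-likelihood of N samples under a model with parameter theta is, up to a model-independent constant, -N times the Kullback-Leibler divergence between the truth and that model, so the Bayesian evidence behaves like a partition function with energy KL(theta). The snippet below is a minimal sketch of that picture; the two KL profiles (a shallow, wide well for the robust model A, a deep, narrow well for the accurate model B) are invented purely for illustration.

```python
import numpy as np
from scipy.special import logsumexp

# Parameter grid with a flat prior on [-3, 3].
theta = np.linspace(-3.0, 3.0, 12001)
dtheta = theta[1] - theta[0]
log_prior = -np.log(6.0)  # uniform prior density 1/6

# Invented KL profiles D_KL(truth || model(theta)), for illustration only:
# model A sits in a shallow, wide well (close to the truth over a broad region),
# model B in a deep, narrow well (exactly on the truth, but only near theta = 1).
kl_A = 0.05 + 0.5 * (theta / 3.0) ** 4
kl_B = 3.0 * (1.0 - np.exp(-((theta - 1.0) ** 2) / (2 * 0.05 ** 2)))

def log_evidence(kl, n):
    """Annealed log-evidence: log of the integral of exp(-n * KL) over the prior."""
    return logsumexp(-n * kl + log_prior + np.log(dtheta))

for n in [1, 10, 30, 100, 300, 1000]:
    log_bf = log_evidence(kl_A, n) - log_evidence(kl_B, n)
    favored = "A (robust)" if log_bf > 0 else "B (accurate)"
    print(f"N = {n:4d}   log P(D|A) - log P(D|B) = {log_bf:+8.2f}   favored: {favored}")
```

For these particular profiles the sign of the log Bayes factor flips between N = 100 and N = 300: with few samples the evidence is dominated by the volume of parameter space where A fits decently (an entropy term), while with many samples the exact fit available to B wins (an energy term), mirroring a first-order phase transition.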

A central problem in the practice of science, and in the day-to-day functioning of biological organisms, is deciding among competing explanations of data containing random errors. We have worked on well-founded methods for trading off the complexity and accuracy of explanations within the frameworks of Bayesian statistics and the Minimum Description Length principle. We are currently applying ideas from statistical learning to integrate our understanding of learning and adaptation across levels of biological organization. For example, we are developing theory to understand how the tetracycline adaptation network in E. coli and the perceptual learning and decision-making pathways of cortex should react to changing environments. We are especially interested in three questions:

  1. How is the optimal representation of the environment in a sensory network affected by the goals of a secondary decision layer that integrates sensory evidence?
  2. In Bayesian learning, how do prior expectations interact with accumulating data to determine the optimal learning trajectory of a network or decision element that describes features of a statistical environment?
  3. In Bayesian learning, when should learning from examples be sudden (switching behavior) as opposed to gradual? (A toy sketch follows this list.)
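As a toy illustration of the third question (and of the complexity-accuracy trade-off above), the sketch below tracks a Bayesian observer watching a hypothetical biased coin. All specifics here (the bias p_true = 0.65, the uniform prior on the bias, equal prior odds on the two hypotheses) are assumptions chosen for illustration only. Within the "biased" model, the posterior-mean estimate of p drifts gradually toward the truth; the model-level posterior probability that the coin is biased at all tends to move in a more switch-like way, because the accumulated log-odds grow roughly linearly with the number of examples and are passed through a logistic function.

```python
import numpy as np
from scipy.special import betaln

rng = np.random.default_rng(0)
p_true = 0.65                         # hypothetical coin bias, for illustration
flips = rng.random(400) < p_true      # a stream of 0/1 examples

# Hypothesis F: fair coin (p = 0.5).  Hypothesis B: biased coin, p ~ Uniform(0, 1).
for n in [10, 25, 50, 100, 200, 400]:
    k = int(flips[:n].sum())                        # heads seen so far
    log_pF = n * np.log(0.5)                        # sequence probability under F
    log_pB = betaln(k + 1, n - k + 1)               # marginal sequence probability under B
    post_B = 1.0 / (1.0 + np.exp(log_pF - log_pB))  # P(B | data), equal prior odds
    p_hat = (k + 1) / (n + 2)                       # posterior mean of p within B
    print(f"n = {n:3d}   gradual p_hat = {p_hat:.3f}   switch-like P(biased) = {post_B:.3f}")
```

Note that the fair hypothesis acts as the simpler description: the marginal likelihood automatically charges the biased model an Occam penalty of order (1/2) log n, the same complexity penalty that appears in the Minimum Description Length principle.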