Neuro 140/240 – Lecture 7

Lecture by Jan Drugowitsch at Harvard University. My personal takeaway on auditing the presented content.

Course overview at

Biological and Artificial Intelligence

A naive view would model the brain as a state that is updated by a function of that state and sensory input to produce behaviour. However, the complexity of the brain makes this function intractable. It is more tractable to make smart hypotheses about how such functions could be structured, as a proxy for the real behaviour.

Ideal observer modelling

The main axiom is that information is uncertain. Typical approaches include Boltzmann machines (stochastic Hopfield networks), Bayesian networks, statistical learning (e.g. support vector machines), variational Bayes, and MCMC. Deep learning initially had no notion of uncertainty, but more recent work does incorporate it.

To understand the environment, the brain needs an understanding of uncertainty. If we can understand how the brain represents and uses uncertainty, we can improve AI algorithms.

Based on Bayesian decision theory, uncertainty is handled by combining a prior over the state of the world P(s_w) with sensory evidence P(e_s | s_w) to yield a posterior P(s_w | e_s). Each P is a probability distribution over the possible states.
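As a minimal sketch of this update rule, consider a hypothetical world with three discrete states (the state values and probabilities below are made up for illustration):

```python
import numpy as np

# Hypothetical prior over three world states s_w.
prior = np.array([0.5, 0.3, 0.2])          # P(s_w)
# Hypothetical likelihood of one sensory observation e_s under each state.
likelihood = np.array([0.1, 0.7, 0.4])     # P(e_s | s_w)

# Bayes' rule: posterior is proportional to likelihood times prior.
posterior = likelihood * prior
posterior /= posterior.sum()               # normalize by P(e_s)
```

Even though the prior favoured the first state, the evidence shifts probability mass towards the second state, which best explains the observation.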

A typical application in the brain is combining uncertain evidence from multiple sources, such as auditory and visual information. Each source provides an estimate of the value together with an uncertainty estimate. The optimal linear combination of the estimates is usually sharper than either alone and is weighted towards the more certain estimate.
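Assuming the two cues are modelled as independent Gaussians (a common simplification, not stated in the lecture notes), the optimal combination is an inverse-variance weighted average:

```python
def fuse(mu_a, var_a, mu_v, var_v):
    """Optimally combine two independent Gaussian cues
    (e.g. an auditory and a visual estimate of the same quantity).
    The fused mean is an inverse-variance weighted average, and the
    fused variance is smaller than either cue's variance."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)  # weight on the auditory cue
    mu = w_a * mu_a + (1 - w_a) * mu_v
    var = 1 / (1 / var_a + 1 / var_v)
    return mu, var

# Hypothetical cues: a noisy auditory estimate and a sharper visual one.
mu, var = fuse(mu_a=10.0, var_a=4.0, mu_v=8.0, var_v=1.0)
# The fused mean lands closer to the more reliable (visual) cue,
# and the fused variance is below both input variances.
```

This reproduces the two qualitative effects mentioned above: the combined estimate is sharper, and it is pulled towards the more certain source.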

Priors allow us to explain several optical illusions (Weiss, Simoncelli & Adelson, 2002). We seem to have a prior favouring slow speeds, so the barber pole illusion arises because we prefer the slower upward velocity over the faster sideways velocity.