Lecture by Gabriel Kreiman at Harvard University. My personal takeaway on auditing the presented content.
Course overview at https://klab.tch.harvard.edu/academia/classes/BAI/bai.html
Biological and Artificial Intelligence
Eventually, we will have ultraintelligent machines, that is, machines that are more intelligent than humans.
Going back to 1950, the Turing test was the first proposed test of whether a machine's behaviour is indistinguishable from a human's. However, the original test is limited to language interaction. Turing tests can be adapted to other modalities such as vision: a visual Turing test would allow any question to be asked about an image.
Intelligence is the greatest problem in science. Tomaso Poggio postulates that if we can understand the brain, we can understand intelligence. Consequently, we might become more intelligent ourselves and intractable problems might be resolved.
What can’t deep convolutional networks do?
An adversarial attack that adds noise to an image can immediately change the prediction of a deep convolutional network. Such a network cannot pass the visual Turing test, because a human would not be fooled by this simple manipulation.
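The adversarial-noise idea can be sketched in a few lines. The snippet below is an illustrative gradient-sign-style attack on a toy linear classifier standing in for a deep network; the lecture does not specify which attack was demonstrated, and all names and numbers here are my own illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)   # fixed weights of a toy linear classifier
x = rng.normal(size=100)   # input "image" (flattened)

def predict(v):
    """Class 1 if the linear score is positive, else class 0."""
    return int(v @ w > 0)

# For a linear model the gradient of the score w.r.t. the input is just w.
# Step each pixel slightly against the sign of that gradient, using the
# smallest step size that flips the sign of the score.
score = x @ w
eps = 1.01 * abs(score) / np.abs(w).sum()
x_adv = x - eps * np.sign(score) * np.sign(w)

# Each pixel moved by only eps, yet the predicted class changes.
print(predict(x), predict(x_adv))
```

The per-pixel perturbation `eps` is small relative to the input values, which is the essence of the attack: a change a human would barely notice flips the machine's decision.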
Action recognition models often rely on everything in the image except the action itself, exploiting biases in the data-collection process to determine the output. Again, such a system would fail the visual Turing test where a human would succeed.
The most powerful computational devices on Earth
The human brain is still the most powerful device because it generalises and can solve unknown tasks.
The Biophysics of computation (Neuroscience 101)
A neuron has dendrites that collect information, an internal function in the cell body that sums the dendritic inputs, and an activation function that leads to an axon carrying the output. Axons then connect to the dendrites of other neurons via synapses.
Studying animals is critical not only for behaviour but also for understanding underlying mechanisms. David Hubel and Torsten Wiesel placed electrodes in the back of the brain and found neurons that responded to the orientation of visual stimuli. Many computer scientists used their (and others') cartoon models of brain function to design neural networks for AI. Indeed, nearly every type of neural network has an analogue in biology.
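The orientation selectivity Hubel and Wiesel observed can be sketched as a filter that responds only when the stimulus orientation matches its preferred orientation; this is also exactly what the first layer of a convolutional network learns. The toy 3x3 filters below are my own illustration, not measured receptive fields:

```python
import numpy as np

# Two "simple cells" with orthogonal preferred orientations.
vertical_filter = np.array([[-1.0, 0.0, 1.0]] * 3)  # prefers vertical edges
horizontal_filter = vertical_filter.T               # prefers horizontal edges

# A stimulus containing a vertical edge: dark on the left, bright on the right.
stimulus = np.zeros((3, 3))
stimulus[:, 2] = 1.0

def response(cell, patch):
    """Dot product of receptive field and stimulus, rectified like a firing rate."""
    return max(0.0, float(np.sum(cell * patch)))

print(response(vertical_filter, stimulus))    # strong response
print(response(horizontal_filter, stimulus))  # no response
```

Only the cell whose orientation matches the edge fires, mirroring the selectivity found in the electrode recordings.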
Circuit diagrams now show the full connectivity within a cubic micron of brain matter.
Listening to a concert of many neurons. We are now able to record from many neurons (tens of thousands) simultaneously over prolonged periods of time.
Causally interfering with neural activity. We are able to turn specific neurons on and off by triggering light-sensitive ion channels.
Together, today we have a better understanding of the connectivity of neurons, the activity of neurons, and even the manipulation of neurons. In the long run, we may be able to understand biological intelligence.
Tangents to the topic
Consciousness is a matter of major debate. If a machine passes the Turing test, does it have a consciousness? Christof Koch argues that consciousness and intelligence are separate phenomena.
Humans already form attachments to simple machines like the Tamagotchi. The Atlas robot from Boston Dynamics is trained by being pushed over; its human-like appearance raises the question of whether cruel behaviour towards machines is ethical.
The perils of AI are:
- Redistribution of jobs
- Unlikely terminator-like scenarios
- Military applications
- To err is algorithmic (just like humans)
- Biases in training data (note that humans have biases too or create them for the machine -> garbage in / garbage out)
- Lack of understanding (we still don’t understand how humans make decisions either)
- Social, mental, and political consequences of rapid changes in labour force
- Rapid growth, faster than development of regulations
But robots playing football are still years, if not decades, away from human-like behaviour.
Another challenge is comprehending humour. Humour is based on higher-level abstractions of content, so a system requires access to world knowledge and must reason about the depicted content to infer the humorous component (e.g. a picture of Abraham Lincoln with an iPhone).
P.S.: Note from Lecture 2: Gabriel Kreiman believes that the next revolutions in machine learning will be based on something that we can learn from biological intelligence.