The intelligent behavior of organisms arises from a mapping of their sensory inputs to suitable actions. Performing this mapping, however, is extremely hard. On the one hand, the sensory input is very high-dimensional – a single human eye alone has over 100 million sensory receptors. On the other hand, the right action must be chosen to achieve complex goals given only uncertain estimates of the environment. Drawing on previous robotics research, we propose a computational principle that performs this mapping by extracting the task-relevant information from the sensory input and generating suitable actions in an integrated manner.
The computational principle consists of three building blocks: recursive estimators, interconnections, and differentiable programming. Recursive estimators extract information about the state of the world by integrating the information from new observations with the actions taken. We interconnect several of these recursive estimators to extract more complex information and, at the same time, to make perception more robust, since each new observation can be compared against the current belief over all connected states. This robust perceptual system then also serves to generate reasonable actions through differentiable programming: it lets us automatically determine how the belief about the world would change under different actions, and thereby find suitable actions to achieve our goals.
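To make these building blocks concrete, here is a minimal sketch in a toy 1D linear-Gaussian world. It is an illustrative assumption, not the project's actual implementation: a single recursive estimator (a one-dimensional Kalman filter) integrates actions and noisy observations into a belief, and because the prediction step is differentiable, gradient descent on the predicted belief selects an action that moves the estimated state toward a goal. All function names and parameters are hypothetical.

```python
import random

def kalman_step(mu, var, u, z, q=0.01, r=0.25):
    """One cycle of a 1D recursive estimator (Kalman filter).

    Predict: incorporate the taken action u (the state shifts by u).
    Update:  fuse the new observation z with the predicted belief.
    q is the process noise variance, r the observation noise variance.
    """
    # Predict step: belief after acting, before observing.
    mu_pred = mu + u
    var_pred = var + q
    # Update step: the Kalman gain weighs observation against prediction.
    k = var_pred / (var_pred + r)
    mu_new = mu_pred + k * (z - mu_pred)
    var_new = (1.0 - k) * var_pred
    return mu_new, var_new

def choose_action(mu, target, lr=0.1, steps=50):
    """Pick an action by gradient descent on the predicted belief.

    Because the predict step is differentiable, the gradient of the
    cost (squared distance of the predicted mean to the target) with
    respect to the action u is available: d/du (mu + u - target)^2.
    """
    u = 0.0
    for _ in range(steps):
        grad = 2.0 * (mu + u - target)
        u -= lr * grad
    return u

random.seed(0)
true_x, mu, var, target = 0.0, 0.0, 1.0, 3.0
for _ in range(30):
    u = choose_action(mu, target)         # act to reduce expected cost
    true_x += u                           # the world responds to the action
    z = true_x + random.gauss(0.0, 0.5)   # noisy sensory observation
    mu, var = kalman_step(mu, var, u, z)  # integrate action and observation
print(round(mu, 2), round(var, 3))
```

In a richer setting, the interconnection of several such estimators would mean that the belief of one filter enters the prediction or update of another, so that an implausible observation can be rejected by the joint belief; the gradient-based action choice generalizes to automatic differentiation through the full predictive model.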
Since we expect that this computational principle can explain a wide variety of intelligent behaviors, we propose it as a more general principle of intelligence. In this project, we therefore study it across a range of intelligent behaviors in collaboration with the interdisciplinary research teams at Science of Intelligence.