In systems neuroscience, we like to say that the goal of the visual system is to extract as much information about the world as possible. We start in the retina with points of light; those points are correlated (look around you: the color of one part of the visual world is often very similar to the color right next to it). So the next set of neurons represents sudden changes in brightness (ON/OFF neurons) in order to decorrelate. In the first stage of visual cortex, we find neurons that respond to edges – areas where you could put several ON/OFF receptive fields in a row (see above). The responses of visual neurons get successively more unrelated to each other as you go deeper into cortex – they begin representing more abstract shapes and then, say, individual faces. But the guiding principle throughout is that visual neurons are trying to represent as much information about the visual world as possible.
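A minimal sketch of this decorrelation idea, under some loose assumptions: stand in for a row of "natural" luminance with smoothed white noise (so neighboring points are correlated, as in a real scene), and stand in for an ON/OFF-style computation with a simple difference between adjacent points. The names and the boxcar kernel here are illustrative choices, not a model of real retinal circuitry.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Natural" luminance stand-in: smoothing white noise makes neighboring
# points of light correlated, like nearby patches of a real scene.
raw = rng.normal(size=10_000)
kernel = np.ones(20) / 20.0
luminance = np.convolve(raw, kernel, mode="valid")

# Crude ON/OFF-style operation: respond to sudden changes in brightness
# by differencing adjacent points (a 1-D "center minus surround").
onoff = np.diff(luminance)

def lag1_corr(x):
    """Correlation between each point and its immediate neighbor."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

print(f"neighbor correlation, raw luminance: {lag1_corr(luminance):.2f}")
print(f"neighbor correlation, ON/OFF output: {lag1_corr(onoff):.2f}")
```

Running this, the raw luminance shows strong neighbor-to-neighbor correlation while the differenced output is close to uncorrelated – the same qualitative move the retina is credited with, in toy form.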
But now let’s look at the nervous system from a broader view. What is it trying to accomplish? If we were economists, we might say that the nervous system is trying to maximize the ‘utility’ of the animal; an ecologist might say that it is trying to maximize the reproductive success of an animal (or: of an animal’s offspring, or its genes).
Is this a reasonable view of the ‘goal’ of the nervous system? If so, where do the goals of the input and the output meet? When do neurons in the visual system of the animal begin representing value, or utility, at some level? Is there some principle from computer science that has something to say about value and sensory representation?