Do “I” exist?

I think therefore I am; or rather, I am currently thinking, therefore I currently am. But where does the “I” come from?

Much has been made of clinical cases where the self seems to malfunction spectacularly: like Cotard syndrome, whose victims believe they do not exist, even though they admit to having a life history; or “dissociative identity disorder,” where a single body seems to harbour multiple selves, each with its own name, memory, and voice. Most of us are not afflicted by such exotic disorders. When we are told that both science and philosophy have revealed the self to be more fragile and fragmentary than we thought, we take the news in our stride and go on with our lives…

The basic question about the self is: what, in essence, am I? Is my identity rooted in something physical (my body/brain) or something psychological (my memories/personality)? Normally, physical and mental go together, so we are not compelled to think of ourselves as primarily one or the other. But thought experiments can vex our intuitions about personal identity. In An Essay Concerning Human Understanding (1689), John Locke imagined a prince and a cobbler miraculously having their memories switched while they sleep: the prince is shocked to find himself waking up in the body of the cobbler, and the cobbler in the body of the prince. To Locke, it seemed clear the prince and the cobbler had in effect undergone a body swap, so psychological criteria must be paramount in personal identity.

What is critical to your identity, Dainton claims, has nothing to do with your psychological make-up. It is your stream of consciousness that matters, regardless of its contents. That’s what makes you you. As long as “your consciousness flows on without interruption, you will go on existing”.

So as long as you don’t fall asleep, then?

Something else that caught my eye:

Yet even the humble roundworm C. elegans, with its paltry 302 neurons and 2,462 synaptic connections (which scientists have exhaustively mapped), has a single neuron devoted to distinguishing its body from the rest of the world. “I think it’s fair to say that C. elegans has a very primitive self-representation,” comments the philosopher-neuroscientist Patricia Churchland—indeed, she adds, “a self.”

Now, I don’t know which neuron she is referring to so I can’t refer to the primary research. However, one strength of C. elegans is that it is so simplified it promotes very clear thinking about complex topics. Consider this: there must be multiple neurons whose activity is affected by the worm’s own internal state; and there are definitely multiple neurons devoted to getting sensory information in from the rest of the world. So does sensing external input + sensing internal state = sense of self? Or does it require intentional interrogation of the internal computations that are detecting internal state? Just because the information is there does not mean the ‘sense’ is there.
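One way to make the “information vs. sense” distinction concrete: internal state can be fully decodable from a neural population even when no single neuron is dedicated to it. The sketch below is a toy illustration, not a model of the actual C. elegans circuit; the neuron counts, mixing weights, and noise level are all invented for the demonstration.

```python
# Toy illustration: internal state is linearly decodable from a population
# even though every neuron mixes internal and external signals -- there is
# no dedicated "self neuron". All numbers here are made up.
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_neurons = 500, 20
internal_state = rng.uniform(0, 1, n_trials)   # e.g. hunger level
external_input = rng.uniform(0, 1, n_trials)   # e.g. odour concentration

# Each neuron responds to a random mixture of both signals, plus noise.
w_int = rng.normal(0, 1, n_neurons)
w_ext = rng.normal(0, 1, n_neurons)
activity = (np.outer(internal_state, w_int)
            + np.outer(external_input, w_ext)
            + 0.1 * rng.normal(0, 1, (n_trials, n_neurons)))

# A linear readout recovers the internal state almost perfectly...
coef, *_ = np.linalg.lstsq(activity, internal_state, rcond=None)
r = np.corrcoef(activity @ coef, internal_state)[0, 1]
print(f"decoding correlation: {r:.2f}")
```

The decoding correlation comes out near 1: the information about internal state is unambiguously present in the population. But nothing in this network reads it out or acts on it, which is exactly the gap between “the information is there” and “the ‘sense’ is there.”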

Monday thought/open question: What is the goal of the nervous system? (Updated)

In systems neuroscience, we like to say that the goal of the visual system is to extract as much information about the world as possible. We start in the retina with points of light; those points are correlated (look around you: the color of one part of the visual world is often very similar to the color right next to it). So the next set of neurons represents sudden changes in brightness (ON/OFF neurons) to decorrelate. In the first stage of visual cortex, we find neurons that respond to edges – areas where you could put several ON/OFF receptive fields in a row. The responses of the visual neurons get successively more unrelated to each other as you go deeper into cortex – they start representing more abstract shapes and then, say, individual faces. But our guiding principle through this all is that the visual neurons are trying to represent as much information about the visual world as possible.

But now let’s look at the nervous system from a broader view. What is it trying to accomplish? If we were economists, we might say that the nervous system is trying to maximize the ‘utility’ of the animal; an ecologist might say that it is trying to maximize the reproductive success of an animal (or: of an animal’s offspring, or its genes).
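The tension between these two goals can be made concrete with a toy example. Suppose an animal has a limited coding budget – here, a single binary distinction among four stimuli. An information-maximizing encoder is indifferent between balanced splits (each preserves one bit), but a utility-maximizing one should spend the bit on the reward-relevant distinction. The stimuli, rewards, and one-bit budget below are all invented for illustration.

```python
# Toy contrast between "maximize information" and "maximize utility":
# with a 1-bit code, which binary distinction should the animal keep?
# All stimuli, probabilities, and rewards here are made up.
import numpy as np

stimuli = ["ripe fruit", "unripe fruit", "cloud shape A", "cloud shape B"]
p = np.array([0.25, 0.25, 0.25, 0.25])     # all stimuli equally common
reward = np.array([1.0, -1.0, 0.0, 0.0])   # only the fruit matters

def expected_utility(split):
    """Best achievable expected reward given a binary code `split`:
    act (e.g. eat) on whichever group has positive mean reward, else do nothing."""
    return max(p[split] @ reward[split], p[~split] @ reward[~split], 0.0)

fruit_split = np.array([True, False, False, False])   # ripe fruit vs. the rest
cloud_split = np.array([False, False, True, False])   # cloud A vs. the rest

print(expected_utility(fruit_split))   # the useful distinction
print(expected_utility(cloud_split))   # equally informative, but worthless
```

Both splits carry the same amount of information about the world, yet only one has any value to the animal – which is one way of phrasing the question of where, along the sensory hierarchy, “information about the world” starts being weighted by utility.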

Is this a reasonable view of the ‘goal’ of the nervous system? If so, where do the goals of the input and the output meet? When do neurons in the visual system of the animal begin representing value, or utility, at some level? Is there some principle from computer science that has something to say about value and sensory representation?

Update: There was a lot of discussion on twitter, which I have partially summarized here.