The replicability of science

  1. What is the importance of a single experiment to science? Very little.
  2. A massive study attempted to replicate 100 psychology experiments. Only 36% of the replications still had ‘significant’ p-values and only 47.4% still had a similar effect size.
  3. replicability of science
  4. This is a description of one attempt at replication:

    “The study he finally settled on was originally run in Germany. It looked at how ‘global versus local processing influenced the way participants used priming information in their judgment of others.’… Reinhard had to figure out how to translate the social context, bringing a study that ran in Germany to students at the University of Virginia. For example, the original research used maps from Germany. “We decided to use maps of one of the states in the US, so it would be less weird for people in Virginia,” he said…Another factor: Americans’ perceptions of aggressive behavior are different from Germans’, and the study hinged on participants scoring their perceptions of aggression. The German researchers who ran the original study based it on some previous research that was done in America, but they changed the ratings scale because the Germans’ threshold for aggressive behavior was much higher…Now Reinhard had to change them back — just one of a number of variables that had to be manipulated.”

  5. In the simplest “hypothesis-test” experiment, everything is held constant except one key parameter (word color; monetary endowment; stereotype threat). In reality, this is never true. You arrive at a laboratory tired, bored, stressed, content.
  6. Experiments are meant to introduce major variations into already noisy data. The hope is that these variations are larger than anything else that occurs during the experiment. Are they?
  7. Even experiments that replicate quite often can turn out to be false.
  8. Even experiments that fail to replicate can turn out to have grains of truth.
  9. The important regression is the likelihood of replication given n independent replications already in existence. If roughly two-thirds of experiments fail to replicate when contained in only one publication, what percent fail when contained in two? Three? (A toy calculation after this list sketches the point.) Science is a society, not the sum of individuals.
  10. Given time and resource constraints, what is the optimal number of experiments that you expect to replicate? What if it should be lower?
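To make item 9 concrete, here is a minimal back-of-the-envelope sketch. It assumes a toy Bayesian model in which every replication is an independent test with a given statistical power and false-positive rate; the numbers below (a 50/50 prior, power of 0.8, alpha of 0.05) are illustrative assumptions of mine, not values taken from the replication study.

```python
# Toy Bayesian sketch of item 9 (made-up numbers, not from the replication study):
# how much does each additional independent 'significant' replication raise the
# probability that an effect is real?

def p_real_given_n_successes(n, prior=0.5, power=0.8, alpha=0.05):
    """P(effect is real | n independent significant results)."""
    p_data_if_real = power ** n   # a real effect is detected with probability = power
    p_data_if_null = alpha ** n   # a null effect false-positives with probability = alpha
    num = prior * p_data_if_real
    return num / (num + (1 - prior) * p_data_if_null)

for n in range(4):
    print(n, round(p_real_given_n_successes(n), 3))
# prints: 0 0.5, 1 0.941, 2 0.996, 3 1.0
```

Under these made-up numbers a single significant result moves a 50/50 prior to about 0.94, and two independent replications move it to about 0.996. The exact values matter far less than the compounding, which is the sense in which science is a society rather than the sum of individuals.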

On Quantity of Information

“On Quantity of Information”

by Walter Pitts

Random remarks are traced by little boys
In wet cement; synapses in the brain
Die off; renewing uplift glyphs mountain
And valley in peneplane; the mouth rounds noise
To consonants in truisms: Thus expands law
Cankering the anoetic anonymous.
“If any love magic, he is most impious:
Him I cut off, who turn his world to straw,
Making him know Me.” So speaks the nomothete
Concealed in crystals, contracting myosin,
Imprisoning man by close-packing in his own kind.
We, therefore, exalt entropy and heat,
Fist-fight for room, trade place, momentum, spin,
Successful enough if life is undesigned.

From Nautilus

Is the idea that neurons perform ‘computations’ in any way meaningful?

I wrote this up two months ago and then forgot to post it. Since then, two different arguments about ‘computation’ have flared up on Twitter.

I figured that meant I should finally post it to help clarify some things. I will have more comments on the general question tomorrow.

Note that I am pasting twitter conversations into wordpress and hoping that it converts them appropriately. If you read this via an RSS reader, it might be better to see the original page.

The word ‘computation’, when used to refer to neurons, has started to bother me. It often seems to be thrown out as a meaningless buzzword, as if using the word computation makes scientific research seem more technical and more interesting. Computation is interesting and important but most of the time it is used to mean ‘neurons do stuff’.

In The Future of the Brain (review here), Gary Marcus gives a nice encapsulation of what I am talking about:

“In my view progress has been stymied in part by an oft-repeated canard — that the brain is not “a computer” — and in part by too slavish a devotion to the one really good idea that neuroscience has had thus far, which is the idea that cascades of simple low level “feature detectors” tuned to perceptual properties, like difference in luminance and orientation, percolate upward in a hierarchy, toward progressively more abstract elements such as lines, letters, and faces.”

Which pretty much sums up how I feel: either brains aren’t computers, or they are computing stuff but let’s not really think about what we mean by computation too deeply, shall we?

So I asked about all this on twitter, then went to my Thanksgiving meal, forgot about it, and ended up getting a flood of discussion that I haven’t been able to digest until now:

(I will apologize to the participants for butchering this and reordering some things slightly for clarity. I hope I did not misrepresent anyone’s opinion.)

The question

Let’s first remember that the very term ‘computation’ is almost uselessly broad.

Neurons do compute stuff, but we don’t actually think about them like we do computers

Just because it ‘computes’, does that tell us anything worthwhile?

The idea helps distinguish them from properties of other cells

Perhaps we just mean a way of thinking about the problem

There are, after all, good examples in the literature of computation

We need to remember that there are plenty of journals that cover this: Neural Computation, Biological Cybernetics and PLoS Computational Biology.

I have always had a soft spot for this paper (how do we explain what computations a neuron is performing in the standard framework used in neuroscience?).

What do we mean when we say it?

Let’s be rigorous here: what should we mean?

A billiard ball can compute. A series of billiard balls can compute even better. But does “intent” matter?

Computation = information transformation

Alright, let’s be pragmatic here.

BAM!

Michael Hendricks hands me my next clickbait post on a silver platter.

Coming to a twitter/RSS feed near you in January 2015…

 

The bigger problem with throwing the word ‘computation’ around like margaritas at happy hour is that it adds weight to…

The brain-in-itself: Kant, Schopenhauer, cybernetics and neuroscience

Artem Kaznatcheev pointed me to this article on Kant, Schopenhauer, and cybernetics (emphasis added):

Kant introduced the concept of the thing-in-itself for that which will be left of a thing if we take away everything that we can learn about it through our sensations. Thus the thing-in-itself has only one property: to exist independently of the cognizant subject. This concept is essentially negative; Kant did not relate it to any kind or any part of human experience. This was done by Schopenhauer. To the question ‘what is the thing-in-itself?’ he gave a clear and precise answer: it is will. The more you think about this answer, the more it looks like a revelation. My will is something I know from within. It is part of my experience. Yet it is absolutely inaccessible to anybody except myself. Any external observer will know about myself whatever he can know through his sense organs. Even if he can read my thoughts and intentions — literally, by deciphering brain signals — he will not perceive my will. He can conclude about the existence of my will by analogy with his own. He can bend and crush my will through my body, he can kill it by killing me, but he cannot in any way perceive my will. And still my will exists. It is a thing-in-itself.

Let us examine the way in which we come to know anything about the world. It starts with sensations. Sensations are not things. They do not have reality as things. Their reality is that of an event, an action. Sensation is an interaction between the subject and the object, a physical phenomenon. Then the signals resulting from that interaction start their long path through the nervous system and the brain. The brain is a tremendously complex system, created for a very narrow goal: to survive, to sustain the life of the individual creature, and to reproduce the species. It is for this purpose and from this angle that the brain processes information from sense organs and forms its representation of the world.

In neuroscience, what is the thing-in-itself when it comes to the brain? What is ‘the will’? Perhaps this is straining the analogy, but what do you have when you take away the sensory input and look at what directs movement and action? The rumbling, churning activity of the brain: the dynamics which are scaffolded by transcription of genes and experience with the environment. That which makes organisms more than a simple f(sensation) = action.

Then as neuroscience advances and we learn more about how the dynamics evolve, how genetic variation reacts to the environment – does the brain-in-itself become more constrained, more knowable? In a certain sense, ‘will’ is qualia; but in another it is that which feels uncaused but is in reality a consequence of our physical life. Will is not diminished by its predictability.

Just some thoughts from a snowy day before Thanksgiving. But Kant and Schopenhauer are worth thinking about…

An “interview” with Laboratory Life

Note: Reading Latour’s “Laboratory Life”, I found that there were too many great quotes to summarize. Having once been chastened by a High School teacher that “when there is a great writer, you should let them have their voice”, I thought that a Q&A with the book would be a suitable way to get the ideas across. All answers are quotes from the book in one way or another. This is not meant to be an introduction to Laboratory Life but rather a series of quotes that I found particularly interesting.

Q: You are a book about how facts – or should I say, “facts” – are constructed in science. In order to understand how that happens, you spent some time in Roger Guillemin’s neuroendocrinology laboratory at the Salk Institute. How would you describe – anthropologically – what you saw there?

Firstly, at the end of each day, technicians bring piles of documents from the bench space through to the office space. In a factory we might expect these to be reports of what has been processed and manufactured. For members of this laboratory, however, these documents constitute what is yet to be processed and manufactured. Secondly, secretaries post off papers from the laboratory at an average rate of one every ten days. However, far from being reports of what has been produced in the factory, members take these papers to be the product of their unusual factory.

By dividing the annual budget of the laboratory by the number of articles published (and at the same time discounting those articles in the laymen’s genre), our observer calculated that the cost of producing a paper was $60,000 in 1975 and $30,000 in 1976 (ed: approximately $260,000 and $123,000, respectively, in 2013 terms). Clearly, papers were an expensive commodity!
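(ed: a quick sanity check of that conversion, using approximate annual-average CPI-U values that are my own assumption rather than anything from the book:)

```python
# Rough check of the 2013-dollar figures in the editorial note above.
# The CPI-U annual averages are approximate assumptions, not from the book.
cpi = {1975: 53.8, 1976: 56.9, 2013: 233.0}

def to_2013_dollars(amount, year):
    """Scale a nominal dollar amount by the ratio of CPI values."""
    return amount * cpi[2013] / cpi[year]

print(round(to_2013_dollars(60_000, 1975), -3))  # ~260,000
print(round(to_2013_dollars(30_000, 1976), -3))  # ~123,000
```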

Moreover, nearly all the peptides (90 percent) are manufactured for internal consumption and are not available as output. The actual output (for example, 3.2 grams in 1976) is potentially worth $130,000 at market value, and although it cost only $30,000 to produce, samples are sent free of charge to outside researchers who have been able to convince one of the members of the laboratory that his or her research is of interest.

Q: Then it seems that, to an outsider observing a laboratory, it is the papers that are important, not the experiments. How did the scientists react when they heard this?

Indeed, our observer incurred the considerable anger of members of the laboratory, who resented their representation as participants in some literary activity. In the first place, this failed to distinguish them from any other writers. Secondly, they felt that the important point was that they were writing about something, and that this something was “neuroendocrinology.” They claimed merely to be scientists discovering facts; [I] doggedly argued that they were writers and readers in the business of being convinced and convincing others.

Q: If the work of science is fundamentally literary, it must have a set of precursors – myths, legends, etc – that it draws on. Could you illustrate that somehow?

Neuroendocrinology seemed to have all the attributes of a mythology: it had had its precursors, its mythical founders, and its revolutions. In its simplest version, the mythology goes as follows: After World War II it was realised that nerve cells could also secrete hormones and that there is no nerve connection between brain and pituitary to bridge the gap between the central nervous system and the hormonal system. A competing perspective, designated the “hormonal feedback model” was roundly defeated after a long struggle by participants who are now regarded as veterans. As in many mythological versions of the scientific past, the struggle is now formulated in terms of a fight between abstract entities such as models and ideas. Consequently, present research appears based on one particular conceptual event, the explanation of which only merits scant elaboration by scientists. The following is a typical account: “In the 1950s there was a sudden crystallization of ideas, whereby a number of scattered and apparently unconnected results suddenly made sense and were intensely gathered and reviewed.”

However, the mythology of its development is very rarely mentioned in the course of the day-to-day activities of members of the laboratory. The beliefs that are central to the mythology are noncontroversial and taken for granted, and only enjoy discussion during the brief guided tours of the laboratory provided for visiting laymen. In the setting, it is difficult to determine whether the mythology is never alluded to simply because it is a remote and unimportant remnant of the past or because it is now a well-known and generally accepted item of folklore.

Q: Okay, but most scientists would say that they spend their time performing experiments in order to establish facts. 

Let us start with the concept of noise. Information is a relation of probability; the more a statement differs from what is expected, the more information it contains. It follows that a central question for any participant advocating a statement in the field is how many alternative statements are equally probable. If a large number can easily be thought of, the original statement will be taken as meaningless and hardly distinguishable from others. If the others seem much less likely than the original statement, the latter will stand out and be taken as a meaningful contribution.
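(ed: one way to make “information is a relation of probability” precise, which is my gloss and not Latour’s own formalism, is Shannon’s self-information, or surprisal, of a statement with prior probability p(x):)

$$I(x) = -\log_2 p(x)$$

A statement everyone already expects, with p(x) close to 1, carries almost no information, while one with prior probability 1/1024 carries 10 bits: the less expected the statement, the more information it contains, and the more work it takes to make it stand as a meaningful contribution.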

The whole series of transformations, between the rats from which samples are initially extracted and the curve which finally appears in publication, involves an enormous quantity of sophisticated apparatus. By contrast with the expense and bulk of this apparatus, the end product is no more than a curve, a diagram, or a table of figures written on a frail sheet of paper. It is this document, however, which is scrutinised by participants for its “significance” and which is used as “evidence” in part of an argument or in an article. Thus, the main upshot of the prolonged series of transformations is a document which, as will become clear, is a crucial resource.

Instead of a “nice curve,” it is all too easy to obtain a chaotic scattering of random points of curves which cannot be replicated. Every curve is surrounded by a flow of disorder, and is only saved from dissolution because everything is written or routinised in such a way that a point cannot as well be in any place of the log paper. The investigator placed a premium on those effects which were recordable; the data were cleaned up so as to produce peaks which were clearly discernible from the background; and, finally, the resulting figures were used as sources of persuasion in an argument.

It was obvious to our observer, however, that everything taken as self-evident in the laboratory was likely to have been the subject of some dispute in earlier papers.

Q: Could you describe the types of facts?

Statements corresponding to a taken-for-granted fact were denoted type 5 statements. Precisely because they were taken for granted, our observer found that such statements rarely featured in discussions between laboratory members. The greater the ignorance of a newcomer, the deeper the informant was required to delve into layers of implicit knowledge, and the farther into the past. Beyond a certain point, persistent questioning by the newcomer about “things that everybody knew” was regarded as socially inept.

More commonly, type 4 statements formed part of the accepted knowledge disseminated through teaching texts. They are, by contrast with type 5 statements, made explicit. This type of statement is often taken as the prototype of scientific assertion.

Many type 3 statements were found in review discussions and are of the form, “A has a certain relationship with B,” qualified by a modality. For instance: “Oxytocin is generally assumed to be produced by the neurosecretory cells of the paraventricular nuclei.” By deleting the modality (“is generally assumed to be”) it is possible to obtain the corresponding type 4 statement: “Oxytocin is produced by the neurosecretory cells of the paraventricular nuclei.”

Type 2 statements could be identified as containing modalities which draw attention to the generality of available evidence (or the lack of it). For example: “There is a large body of evidence to support the concept of a control of the pituitary by the brain.”

Type 1 statements comprise conjectures or speculations (about a relationship) which appear most commonly at the end of papers.

It would follow that changes in statement type would correspond to changes in fact-like status. For example, the deletion of modalities in a type 3 statement would leave a type 4 statement, whose facticity would be correspondingly enhanced.

Q: Okay, let’s take this metaphor seriously. If the purpose of a laboratory is to construct papers that persuade, why does a scientist do this?

It is true that a good deal of laboratory conversation included mention of the term credit. The observer’s notebooks reveal the almost daily reference to the distribution of credit. It was a commodity which could be exchanged. The beginning of a scientist’s career entails a series of decisions by which individuals gradually accumulate a stock of credentials. These credentials correspond to the evaluation by others of possible future investments in that scientist. The investments have an enormous payoff both because of a concentration of credit in the institute and because of a high demand for credible information in the field. In terms of his pursuit of reward, his career makes little sense; as an investor of credibility it has been very successful.

For example, a successful investment might mean that people phone him, his abstracts are accepted, others show interest in his work, he is believed more easily and listened to with greater attention, he is offered better positions, his assays work well, data flow more reliably and form a more credible picture. The objective of market activity is to extend and speed up the credibility cycle as a whole. Those unfamiliar with daily scientific activity will find this portrayal of scientific activity strange unless they realise that only rarely is information itself “bought.” Rather, the object of “purchase” is the scientist’s ability to produce some sort of information in the future. The relationship between scientists is more like that between small corporations than that between a grocer and his customer.

Another key feature of the hierarchy is the extent to which people are regarded as replaceable. When, for example, a participant talks about leaving the group, he often expresses concern about the fate of the antisera, fractions, and samples for which he has been responsible. It is these, together with the papers he has produced, that represent the riches needed by a participant to enable him to settle elsewhere and write further papers. Since the value of information is thought to depend on its originality, the higher a participant in the hierarchy the less replaceable he is thought to be.

Q: Credit is important because it means that the science is deemed more credible. Talk about how this affects the perception of the science.

For instance, the standing of one scientist might be such that when he defines a problem as important, no one feels able to object that it is a trivial question; consequently, the field may be moulded around this important question, and funds will be readily forthcoming. One review specified fourteen criteria which had to be met before the existence of a new releasing factor could be accepted. These criteria were so stringent that only a few signals could be distinguished from the background noise. This, in turn, meant that most previous literature had to be dismissed. By increasing the material and intellectual requirements, the number of competitors was reduced. 

Whether or not the number and quality of inscriptions constituted a proof depended on negotiations between members. Let’s say that Wilson wants to know the basis for the claim that the peptides have no activity when injected intravenously, so that they can counter any possible objections to their argument. At first sight, a Popperian might be delighted by Flower’s response. It is clear, however, that the question does not simply hinge on the presence or absence of evidence. Rather Flower’s comment shows that it depends on what they choose to accept as negative evidence. For him, the issue is a practical question. This example demonstrates that the logic of deduction cannot be isolated from its sociological grounds.

Similarly, a colleague’s claim was dismissed by showing an almost perfect fit between CRF, an important and long sought-after releasing factor, and a piece of haemoglobin, a relatively trivial protein. The dismissal effect is heightened by the creation of a link between his recent claim and the well-known blunder which the same colleague had committed a few years earlier.

They appear to have developed considerable skills in setting up devices which can pin down elusive figures, traces, or inscriptions in their craftwork, and in the art of persuasion. The latter skill enables them to convince others that what they do is important, that what they say is true, and that their proposals are worth funding. They are so skillful, indeed, that they manage to convince others not that they are being convinced but that they are simply following a consistent line of interpretation of available evidence.

Q: If you could summarize everything, how would you do it?

Our argument has one central feature: the set of statements considered too costly to modify constitute what is referred to as reality.

The result of the construction of a fact is that it appears unconstructed by anyone; the result of rhetorical persuasion in the agonistic field is that participants are convinced that they have not been convinced; the result of materialisation is that people can swear that material considerations are only minor components of the “thought process”; the result of the investments of credibility is that participants can claim that economics and beliefs are in no way related to the solidity of science; as to the circumstances, they simply vanish from accounts, being better left to political analysis than to an appreciation of the hard and solid world of facts!

By being sufficiently convincing, people will stop raising objections altogether, and the statement will move toward a fact-like status. Instead of being a figment of one’s imagination (subjective), it will become a “real objective thing,” the existence of which is beyond doubt.

The first scientist…or natural philosopher

Nature has a review of a book on Aristotle:

Aristotle is considered by many to be the first scientist, although the term postdates him by more than two millennia. In Greece in the fourth century BC, he pioneered the techniques of logic, observation, inquiry and demonstration. These would shape Western philosophical and scientific culture through the Middle Ages and the early modern era, and would influence some aspects of the natural sciences even up to the eighteenth century…

Leroi, an evolutionary developmental biologist, visits the Greek island of Lesvos — where Aristotle made observations of natural phenomena and anatomical structures — and puts his own observations in dialogue with those of the philosopher. It was in the island’s lagoon of Kolpos Kalloni that Aristotle was struck by the anatomy of fish and molluscs, and started trying to account for the function of their parts. Leroi’s vivid descriptions of the elements that inspired Aristotle’s biological doctrines — places, colours, smells, marine landscapes and animals, and local lore — enjoin the reader to grasp them viscerally as well as intellectually.

But it is important to distinguish between natural philosophy and science. I have always thought of Francis Bacon as the first scientist because he, y’know, invented much of what we consider the scientific method. I don’t know the extent to which he codified existing ideas versus created some sort of novel synthesis.

The history of the scientific method is, of course, a long gradient. Perhaps it began with an even earlier innovator in scientific methodology, Ibn al-Haytham:

The prevailing wisdom at the time was that we saw what our eyes, themselves, illuminated. Supported by revered thinkers like Euclid and Ptolemy, emission theory stated that sight worked because our eyes emitted rays of light — like flashlights. But this didn’t make sense to Ibn al-Haytham. If light comes from our eyes, why, he wondered, is it painful to look at the sun? This simple realization catapulted him into researching the behavior and properties of light: optics…

But Ibn al-Haytham wasn’t satisfied with elucidating these theories only to himself, he wanted others to see what he had done. The years of solitary work culminated in his Book of Optics, which expounded just as much upon his methods as it did his actual ideas. Anyone who read the book would have instructions on how to repeat every single one of Ibn al-Haytham’s experiments.

Addiction and free will


Bethany Brookshire, aka Scicurious, has an awesome article on how we think of addiction:

None of these views are wrong. But none of them are complete, either. Addiction is a disorder of reward, a disorder of learning. It has genetic, epigenetic and environmental influences. It is all of that and more. Addiction is a display of the brain’s astounding ability to change — a feature called plasticity  — and it showcases what we know and don’t yet know about how brains adapt to all that we throw at them.

…Addiction involves pleasure and pain, motivation and impulsivity. It has roots in genetics and in environment. Every addict is different, and there are many, many things that scientists do not yet know. But one thing is certain: The only overall explanation for addiction is that the brain is adapting to its environment. This plasticity takes place on many levels and impacts many behaviors, whether it is learning, reward or emotional processing.  If the question is how we should think of addiction, the answer is from every angle possible.

But this Aeon piece on addiction is still stuck on the mind-body problem:

In an AA meeting, such setbacks are often seen as an ego out of control, a lack of will. Yet research describes a powerful chemical inertia that can begin early in life. In 96.5 per cent of cases, addiction risk is tied to age; using a substance before the age of 21 is highly predictive of dependence because of the brain’s vulnerability during development. And childhood trauma drives substance use in adolescence. A study of 8,400 adults, published in 2006 in the Journal of Adolescent Health, found that enduring one of several adverse childhood experiences led to a two- to three-fold increase in the likelihood of drinking by age 14.

…Her multiple relapses, according to recent science, are no ethical or moral failing – no failure of will. Instead, they are the brain reigniting the neurological and chemical pathways of addiction.

Is will not the result of chemicals, or do we believe in souls again? Here is a recent interview with Daniel Dennett on neuroscience and free will:

Given that we now know — and can even perturb — some of the brain mechanisms of morality, and we see perhaps more clearly than ever that this is biological, what are the implications for blame, credit and free will to us, to everyday people?

First, it’s no news that your mind is your brain, and that every decision you make and every thought you have and every memory you recall is somehow lodged in your brain and involves brain activity. Up until now, we haven’t been able to say much more than that. Now, it’s getting to the point where we can. But it has almost no implications for morality and free will.

…Somebody wrote a book called ‘My Brain Made Me Do It,’ and I thought, ‘What an outrageous title! Unless it’s being ironic.’ Of course my brain made me do it! What would you want, your stomach to make you do it?

If you said, ‘My mind made me do it,’ then people would say, ‘Yes, right.’ In other words, you’re telling me you did this on purpose, you knew what you were doing. Well, if you do something on purpose and you know what you’re doing and you did it for reasons good, bad or indifferent, then your brain made you do it, of course. It doesn’t follow that you were not the author of that deed. Why? Because you are your embodied brain.

What should we be allowed to forget?

Should we be dampening the emotional aspect of memory?

Two decades ago, scientists began to wonder if they could weaken traumatic memories by suppressing the hormonal rush that accompanies their formation. They turned to propranolol, which was already on the market as a treatment for hypertension and blocks the activity of hormones like epinephrine and norepinephrine….Next, in 2002, neuroscientists reported that emergency room patients who took propranolol within 6 hours of a traumatic event were less likely to experience the heightened emotions and arousal associated with PTSD one month later, compared with people who took placebos.

The hitch was that in order to interfere with memory consolidation, propranolol needed to be given within hours of a trauma, long before doctors knew whether someone would go on to develop PTSD. But around the same time, studies began to show that memories can once again become fragile when they are recalled…Perhaps, researchers hypothesized, propranolol could weaken emotional memories if PTSD patients took the drug after they conjured up the details of a painful experience. By blocking the effects of norepinephrine and epinephrine upon recall, propranolol might dampen down activity in the amygdala and disrupt reconsolidation.

I liked this comment someone left on the article:

If the memory of my trauma were to be removed, I would make no sense to myself.

If we could edit our memories, what is important to who we are? Is there a threshold of pain beyond which we should not be forced to endure our entire lives? What was adaptive one hundred thousand years ago may not be adaptive in modern society.

Do “I” exist?

I think therefore I am; or rather, I am currently thinking, therefore I currently am. But where does the “I” come from?

Much has been made of clinical cases where the self seems to malfunction spectacularly: like Cotard syndrome, whose victims believe they do not exist, even though they admit to having a life history; or “dissociative identity disorder,” where a single body seems to harbour multiple selves, each with its own name, memory, and voice. Most of us are not afflicted by such exotic disorders. When we are told that both science and philosophy have revealed the self to be more fragile and fragmentary than we thought, we take the news in our stride and go on with our lives…

The basic question about the self is: what, in essence, am I? Is my identity rooted in something physical (my body/brain) or something psychological (my memories/personality)? Normally, physical and mental go together, so we are not compelled to think of ourselves as primarily one or the other. But thought experiments can vex our intuitions about personal identity. In An Essay Concerning Human Understanding (1689), John Locke imagined a prince and a cobbler miraculously having their memories switched while they sleep: the prince is shocked to find himself waking up in the body of the cobbler, and the cobbler in the body of the prince. To Locke, it seemed clear the prince and the cobbler had in effect undergone a body swap, so psychological criteria must be paramount in personal identity.

What is critical to your identity, Dainton claims, has nothing to do with your psychological make-up. It is your stream of consciousness that matters, regardless of its contents. That’s what makes you you. As long as “your consciousness flows on without interruption, you will go on existing”.

So as long as you don’t fall asleep, then?

Something else that caught my eye:

Yet even the humble roundworm C elegans, with its paltry 302 neurons and 2,462 synaptic connections (which scientists have exhaustively mapped), has a single neuron devoted to distinguishing its body from the rest of the world. “I think it’s fair to say that C elegans has a very primitive self-representation” comments the philosopher-neuroscientist Patricia Churchland—indeed, she adds, “a self.”

Now, I don’t know which neuron she is referring to so I can’t refer to the primary research. However, one strength of C. elegans is that it is so simplified it promotes very clear thinking about complex topics. Consider this: there must be multiple neurons whose activity is affected by the worm’s own internal state; and there are definitely multiple neurons devoted to getting sensory information in from the rest of the world. So does sensing external input + sensing internal state = sense of self? Or does it require intentional interrogation of the internal computations that are detecting internal state? Just because the information is there does not mean the ‘sense’ is there.