Yeah, but what has ML ever done for neuroscience?

This question has been going round the neurotwitters over the past day or so.

Let’s limit ourselves to ideas that came from machine learning and have influenced how we understand neural implementation in the brain. Physics doesn’t count!

  • Reinforcement learning is always my go-to, though we have to remember that the initial connection ran from neuroscience! In Sutton and Barto 1990, they explicitly note that “The TD model was originally developed as a neuron like unit for use in adaptive networks”. There is also the obvious connection to the Rescorla-Wagner model of Pavlovian conditioning. But the work showing dopamine as prediction error is too strong to ignore (a toy sketch of that error signal follows this list).
  • ICA is another great example. Tony Bell was specifically thinking about how neurons represent the world when he developed the Infomax-based ICA algorithm (according to a story from Terry Sejnowski). This is the canonical example of V1 receptive field construction (see the second sketch below).
    • Conversely, I personally would not count sparse coding. Although developed as another way of thinking about V1 receptive fields, it was not – to my knowledge – an outgrowth of an idea from ML.
  • Something about Deep Learning for hierarchical sensory representations, though I am not yet clear on what the principle is that we have learned. Progressive decorrelation through hierarchical representations has long been the canonical view of sensory and systems neuroscience. Just see the preceding paragraph! But can we say something has flowed back from ML/DL? From Yamins and DiCarlo (and others), can we say that optimizing performance at the output layer is sufficient to get decorrelation similar to the nervous system’s?
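
To make the dopamine point concrete, here is a minimal delta-rule sketch of that prediction error. The learning rate and reward schedule are illustrative choices of mine, not anything from Sutton and Barto:

```python
# Rescorla-Wagner / TD-style prediction error: the quantity that
# dopamine neurons appear to report. All numbers are illustrative.

alpha = 0.1              # learning rate
V = 0.0                  # learned value of a cue
for trial in range(100):
    reward = 1.0         # the cue is always followed by reward
    delta = reward - V   # prediction error ("dopamine signal")
    V += alpha * delta   # value update
    if trial in (0, 10, 99):
        print(f"trial {trial:3d}: prediction error = {delta:.3f}")
# the error is large for surprising rewards and decays toward zero
# as the reward becomes predicted, just like dopamine responses
```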
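
And here is a toy version of the V1 story. I am using scikit-learn’s FastICA rather than Bell’s original Infomax algorithm (the recovered filters are qualitatively similar), and every parameter below is an illustrative choice:

```python
import numpy as np
from sklearn.datasets import load_sample_image
from sklearn.decomposition import FastICA

# ICA on natural-image patches: the learned filters come out localized
# and oriented, qualitatively like V1 simple-cell receptive fields.
img = load_sample_image("china.jpg").mean(axis=2)  # grayscale image
rng = np.random.default_rng(0)
P, n = 12, 5000                                    # patch size / count
ys = rng.integers(0, img.shape[0] - P, n)
xs = rng.integers(0, img.shape[1] - P, n)
X = np.stack([img[y:y + P, x:x + P].ravel() for y, x in zip(ys, xs)])
X -= X.mean(axis=1, keepdims=True)                 # remove patch mean

# whiten="unit-variance" needs sklearn >= 1.1; use whiten=True on older
ica = FastICA(n_components=49, whiten="unit-variance",
              max_iter=500, random_state=0)
ica.fit(X)
filters = ica.components_.reshape(-1, P, P)        # Gabor-like filters
```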

And yet… what else? Bayes goes back to Helmholtz, in a way, and at least precedes “machine learning” as a field. Are there examples of the brain implementing… an HMM? t-SNE? SVMs? Discriminant analysis (okay, maybe this is another example)?

My money is on ideas from Deep Learning filtering back into neuroscience – dropout and LSTMs and so on – but I am not convinced they have made a major impact yet.

Studying the brain at the mesoscale

It is not entirely clear that we are going about studying the brain in the right way. Zachary Mainen, Michael Häusser and Alexandre Pouget have an alternative to our current focus on (relatively) small groups of researchers pursuing their own idiosyncratic questions:

We propose an alternative strategy: grass-roots collaborations involving researchers who may be distributed around the globe, but who are already working on the same problems. Such self-motivated groups could start small and expand gradually over time. But they would essentially be built from the ground up, with those involved encouraged to follow their own shared interests rather than responding to the strictures of funding sources or external directives…

Some sceptics point to the teething problems of existing brain initiatives as evidence that neuroscience lacks well-defined objectives, unlike high-energy physics, mathematics, astronomy or genetics. In our view, brain science, especially systems neuroscience (which tries to link the activity of sets of neurons to behaviour) does not want for bold, concrete goals. Yet large-scale initiatives have tended to set objectives that are too vague and not realistic, even on a ten-year timescale.

Here are the concrete steps they suggest in order to form a successful ‘mesoscale’ project:

  1. Focus on a single brain function.
  2. Combine experimentalists and theorists.
  3. Standardize tools and methods.
  4. Share data.
  5. Assign credit in new ways.

Obviously, I am comfortable living on the internet a little more than the average person. But with the tools that are starting to proliferate for collaborations – Slack, GitHub, and Skype being the most frequently used right now – there is really very little reason for collaborations to stay confined to neighboring labs.

The real difficulties are two-fold. First, you must actually meet your collaborators at some point! Generating new ideas for a collaboration rarely happens without the kind of spontaneous discussions that arise when physically meeting people. When communities are physically spread out or do not meet in a single location, this can happen less than you would want. If nothing else, this proposal seems like a call for attending more conferences!

Second is the ad-hoc way data is collected. Calls for standardized datasets have been around about as long as there has been science to collaborate on, and it does not seem like the problem will be solved any time soon. And even when datasets have been standardized, the questions they were collected to answer may be too specific to be of much utility even to closely-related researchers. This is why I left the realm of pure theory and became an experimentalist as well. Theorists are rarely able to convince experimentalists to take time out of their experiments to test some wild new theory.

But these mesoscale projects really are the future. They are a way for scientists to be more than the sum of their parts, and to be part of an exciting community that is larger than one or two labs! Perhaps a solid step in this direction would be to utilize the tools that are available to initiate conversations within the community. Twitter does this a little, but where are the foraging Slack chats? Or amygdala, PFC, or evidence-accumulation communities?

Sophie Deneve and the efficient neural code

Neuroscientists have a schizophrenic view of how neurons encode information. On the one hand, we say, neurons are ultra-efficient and as precise as possible in their encoding of the world. On the other hand, neurons are pretty noisy, with the variability in their spiking increasing with the spike rate (Poisson spiking). In other words, there is information in the averaged firing rate – so long as you can look at enough spikes. One might say that this is a very foolish way to construct a good code to convey information, and yet if you look at the data that’s where we are*.

Sophie Deneve visited Princeton a month or so ago and gave a very insightful talk on how to reconcile these two viewpoints. Can a neural network be both precise and random?


The first thing to think about is that it is really, really weird that the spiking is irregular. Why not have a simple, consistent rate code? After all, when spikes enter the dendritic tree, noise will naturally be filtered out, causing spiking at the cell body to become regular. We could just keep this regularity; after all, the decoding error of any downstream neuron will be much lower than for an irregular, noisy code. This should make us suspicious: maybe we see Poisson noise because there is something more going on.

We can first consider any individual neuron as a noisy accumulator of information about its input. The fast excitation and slow inhibition of an efficient code make every neuron’s voltage look like a random walk across an internal landscape, as it painstakingly finds the times when excitation exceeds inhibition in order to fire off a spike.

So think about a network of neurons receiving some signal. Each neuron of the network is getting this input, causing its membrane voltage to quake a bit up and a bit down, slowly increasing with time and (excitatory) input. Eventually, it fires. But if the whole network is coding, we don’t want anything else to fire. After all, the network has fired, it has done its job, signal transmitted. So not only does the spike send output to the next set of neurons, but it also sends inhibition back into the network, suppressing all the other neurons from firing! And if that neuron didn’t fire, another one would have quickly taken its place.


This simple network has exactly the properties that we want. If you look at any given neuron, it is firing in a random fashion. And yet, if you look across neurons their firing is extremely precise!
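
For concreteness, here is a minimal sketch of such a spike-coding network in the spirit of Denève and Machens. The network size, the random decoding weights, and the one-spike-per-timestep rule are simplifications of mine, not the authors’ actual model:

```python
import numpy as np

# Each neuron spikes only when its spike would reduce the network's
# reconstruction error; the spike instantly lowers everyone else's
# drive through the shared error term (i.e., recurrent inhibition).
rng = np.random.default_rng(0)
N, T, dt, lam = 50, 2000, 1e-3, 10.0       # neurons, steps, dt, decay
w = rng.normal(0, 1.0 / np.sqrt(N), N)     # each neuron's decoding weight

x = np.sin(2 * np.pi * np.arange(T) * dt)  # 1-D signal to encode
xhat, xhat_trace = 0.0, np.empty(T)        # network's running estimate
spikes = np.zeros((T, N), dtype=bool)

for t in range(T):
    xhat -= dt * lam * xhat                # leaky decay of the estimate
    err = x[t] - xhat                      # coding error shared by all
    gain = w * err - w**2 / 2              # error drop if neuron i fires
    i = np.argmax(gain)
    if gain[i] > 0:                        # fire only if it helps
        spikes[t, i] = True
        xhat += w[i]                       # spike updates the estimate,
                                           # suppressing the others
    xhat_trace[t] = xhat

print("per-neuron rates (Hz):", spikes.sum(0)[:5] / (T * dt))
print("RMS coding error:", np.sqrt(np.mean((x - xhat_trace) ** 2)))
# single neurons fire irregularly, yet the population tracks x tightly;
# silence one neuron and another will pick up its spikes
```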

* Okay, the code is rarely actually Poisson. But a lot of the time it is close enough.

References

Denève, S., & Machens, C. K. (2016). Efficient codes and balanced networks. Nature Neuroscience, 19(3), 375–382. doi:10.1038/nn.4243

These are the Computational [and Systems] Neuroscience Blogs (updated)

I was recently asked which blogs deal with Computational Neuroscience. There aren’t a lot of them – most neuroscience blogs are very psych/cog focused because, honestly, that’s what the majority of the public cares about. Here are all of the ones that I know of (I am including Systems Neuro because it can be hard to disambiguate these things):

Interesting (Computational) Neuroscience Papers

Pillow Lab Blog

Memming

Anne Churchland

Bradley Voytek

xcorr

Quasiworking memory

Paxon Frady’s blog

Its Neuronal

Romain Brette’s Blog

There is one other that I am blanking on and cannot find in my feedly right now. I will update later, and would welcome any suggestions!


Neuroscience podcasts

I have a long drive to work each day so I listen to a lot of podcasts. I have been enjoying the new Unsupervised Thinking podcast from some computational neuroscience graduate students at Columbia. So far they have discussed: Blue Brain, Brain-Computer Interfaces, and Neuromorphic Computing. Where else would you find that?

I also found out that I got a shout-out on the Data Skeptic podcast (episode: Neuroscience from a Data Scientist’s perspective).

Update: I should also mention that I quite like the Neurotalk podcast. The grad students (?) interview neuroscientists who come to give talks at Stanford. Serious stuff. Raw Data was also recommended to me as up-my-alley but I have not yet had a chance to listen to it. YMMV.

Ben Carson is not a neuroscientist


Photo by Gage Skidmore

As every neuroscientist can tell you, most people don’t understand the difference between neuroscientists, neurosurgeons, and neurologists.

Neurologist: A medical doctor who diagnoses and treats diseases of the nervous system

Neurosurgeon: A medical doctor who slices your brain up in order to heal it

Neuroscientist: A scientist who studies how the nervous systems of all animals work. Most work at a level so abstract it seems pointless (but it isn’t!)

What does this mean? A neurologist listens to your symptoms and will try to figure out what has gone wrong in your brain; a neuroscientist tries to understand how the brain and nervous system work down to the finest detail, no matter how useless-seeming that detail might be; a neurosurgeon specializes in performing very technically challenging surgical procedures to cure disorders of the nervous system.

Ben Carson, a neurosurgeon and presidential candidate, published a tweet showcasing what he knows about neuroscience:

…the brain can process two million bits of information per second. It remembers everything you’ve ever seen, everything you’ve ever heard…

This is, if not wrong, then just plain old made up.

Let’s break this down:

“the brain can process two million bits of information per second”

Now two million bits per second certainly sounds like a lot! 2 million bits is what you might know as 2 megabits – 2,000,000 bits ÷ 8 bits per byte = 250,000 bytes, or roughly 250 kilobytes. For comparison, here is a picture of a Corgi in a Mario costume that is a little more than 200KB:

[image: Corgi in a Super Mario costume]

Is that too much information for you? Does it blow your mind??

It is actually really hard to calculate how much information a nervous system is ‘processing’. In fact, I can only find one paper that even attempts to answer a small part of that question: how much does the eye tell the brain? By recording from neurons in the retina, these scientists were able to estimate that one retina will transmit ~800KB/sec. This may be a bit of an overstatement [see (1) below for discussion], but obviously – the visual system is transmitting a lot of information.

But your eye is not the only thing that transmits information to the brain! You have ears, you can touch, you can sense how hungry you are or how sick you feel. All the while you are making decisions and thinking about the past and the future. Your brain is computing a lot.

In other words, while it may seem at first like ‘the brain can process two million bits of information per second’ is an overstatement, it is actually an understatement. And probably by a lot. But more importantly: we don’t know, we don’t have any clue or guess, and I have no idea where Ben Carson pulled this number from. It is plain old made up.

“It remembers everything you’ve ever seen, everything you’ve ever heard”

This makes everyone sound a bit like Santa Claus! In reality, what we know points in the opposite direction. Although it is popular to describe the brain as a computer, it is not. From the moment of perception, the brain begins by filtering filtering filtering. Your eye receives a barrage of light – and much of this is filtered away. This image gets sent to the brain – and much of this is filtered away. Your mind does its best to infer what is occurring in the world – and in the process, much is filtered away or simply assumed. It is tragically easy to force someone to perceive something that is not there. In other words, right now you cannot remember everything that you saw two seconds ago! It is simply not available to your conscious mind.

But we can be a little generous – what about memories? Could we at least recall everything we consciously perceived? Everyone knows that is not true: who can remember being a baby? And even as adults we do not remember everything. Memories are not just photos to be peered at; they are a dense web of associations. These associations can be activated together, but always in the context of whatever else is going on in the brain (and this is not even getting into totally false memories).

This is a particular problem with eyewitness testimony. It is pretty well-known at this point that eyewitness memory is unreliable and prone to manipulation. Simply asking a witness to describe someone seems to modify the memory – leaving the original gone forever.

The Inside Out view of memory as a discrete collection of little movies is wrong – though even in this movie they know that we can forget things forever! – and is based on an incorrect view of how the brain works. Memories are not crystalline balls ready to be sent up to consciousness at any moment, but a web of connections that can easily be rewired. Based on what we know about learning and memory, Ben Carson’s quote is wrong, and almost certainly made up.

From everything we know, Ben Carson is a phenomenal neurosurgeon. But Ben Carson is not a neuroscientist.

Addendum:

(1) How much information can the nervous system process? This is a really interesting question! It may seem straightforward, but it actually has a lot of different interpretations. Let’s just take the example of our two eyes, each looking out onto the world. The eyes do not see totally different parts of the world, but an overlapping scene; just close each eye in succession and you will see that much of the scene stays the same.

In a sense, this means they are processing the same information about the world. Both can see the laptop (or phone/etc) in front of you and so much of what they see is redundant. If the left eye sees, say, 1MB of visual information, and the other does as well, does that mean the two eyes are processing 2MB of information? Or are they simply processing 1.25MB in parallel (the other 0.75 MB being the same thing in each eye – redundant overall)?
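
To make the bookkeeping concrete, here is a toy calculation with made-up numbers: two ‘eyes’ each see one binary pixel, and they agree 75% of the time:

```python
import numpy as np

def H(p):
    """Shannon entropy (bits) of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# joint distribution P(L, R): mass on the diagonal = shared scene
joint = np.array([[0.375, 0.125],
                  [0.125, 0.375]])

H_L = H(joint.sum(axis=1))      # info in the left eye alone:  1.00 bit
H_R = H(joint.sum(axis=0))      # info in the right eye alone: 1.00 bit
H_LR = H(joint.ravel())         # info in the pair, jointly:  ~1.81 bits
I = H_L + H_R - H_LR            # mutual information (redundancy) ~0.19

print(f"naive sum: {H_L + H_R:.2f} bits, actual: {H_LR:.2f} bits")
# adding the two eyes' bit rates double-counts whatever they share
```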

Behold, The Blue Brain

The Blue Brain project releases their first major paper today and boy, it’s a doozy. Including supplements, it’s over 100 pages long, with 40 figures and 6 tables. In order to properly understand everything in the paper, you have to go back and read a bunch of other papers they have released over the years that detail their methods. This is not a scientific paper: it’s a goddamn philosophical treatise on The Nature of Neural Reconstruction.

The Blue Brain Project – or should I say Henry Markram? It is hard to say where the two diverge – aims to simulate absolutely everything in a complete mammalian brain. Right now, though, it sits at a middle ground: other simulations have replicated more neurons (Izhikevich had a model with 10^11 neurons of 21 subtypes), while at the other extreme MCell has completely reconstructed everything about a single neuron – down to the diffusion of single molecules – in a way that Blue Brain does not.

The focus of Blue Brain right now is a certain level of simulation that derives from a particular mindset in neuroscience. You see, people in neuroscience work at all levels: from individual molecules to flickering ion channels to single neurons up to networks and whole brain regions. Markram came out of Bert Sakmann’s lab (where he discovered STDP) and has his eye on the ‘classical’ tradition that stretches back to Hodgkin and Huxley. He is measuring distributions of ion channels and spiking patterns and extending the basic Hodgkin-Huxley model into tinier compartments and ever more fractal branching patterns (a minimal version of that classical model is sketched below). In a sense, this is swimming against the current of contemporary neuroscience. While plenty of people are still doing single-cell physiology, new tools that allow imaging of many neurons simultaneously in behaving animals have reshaped the direction of the field – and what we can understand about neural networks.
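
For a sense of what that classical level of description looks like, here is a minimal single-compartment Hodgkin-Huxley neuron with the textbook 1952 squid-axon parameters. Blue Brain effectively multiplies this into thousands of coupled compartments per cell; this sketch uses simple forward-Euler integration and is not from their codebase:

```python
import numpy as np

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3     # uF/cm^2, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4           # reversal potentials, mV

# voltage-dependent gating rates (classic squid-axon fits)
def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

dt, T = 0.01, 50.0                         # ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32        # resting state
n_spikes, V_prev = 0, V
for step in range(int(T / dt)):
    I_ext = 10.0 if step * dt > 5 else 0.0 # uA/cm^2 current step
    I_ion = (gNa * m**3 * h * (V - ENa)
             + gK * n**4 * (V - EK)
             + gL * (V - EL))
    V += dt * (I_ext - I_ion) / C
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    if V_prev < 0.0 <= V:                  # count upward zero crossings
        n_spikes += 1
    V_prev = V
print(f"{n_spikes} spikes in {T:.0f} ms")  # tonic firing during the step
```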

Some very deep questions arise here: is this enough? What will this tell us and what can it not tell us? What do we mean when we say we want to simulate the brain? How much is enough? We don’t really know – though the answer to the first question is assuredly no – and we certainly don’t know enough to even begin to answer the others.

[figure: m-types]

The function of the new paper is to collate in one place all of the data that they have been collecting – and it is a doozy. They report having recorded and labeled >14,000 (!!!!!) neurons from somatosensory cortex of P14 rats, with full reconstruction of more than 1,000 of these neurons. That’s, uh, a lot. And they use a somewhat convoluted terminology to describe all of these, throwing around terms like ‘m-type’ (morphological type) and ‘e-type’ (electrical type) and ‘me-type’ in order to classify the neurons. It’s something, I guess.

[figure: e-types]

Since the neurons were taken from different animals at different times, they do a lot of inference to determine connectivity, ion channel conductance, etc. And that’s a big worry because – how many parameters are being fit here? How many channels are being missed? You get funny sentences in the paper like:

[We compared] in silico (ed – modeled) PSPs with the corresponding in vitro (ed – measured in a slice prep) PSPs. The in silico PSPs were systematically lower (ed – our model was systematically different from the data). The results suggested that reported conductances are about 3-fold too low for excitatory connections, and 2-fold too low for inhibitory connections.

And this worries me a bit; are they not trusting their own measurements when it suits them? Perhaps someone who reads the paper more closely can clarify these points.

They then proceed to run these simulated neurons and perform ‘in silico experiments’. They first describe lowering the extracellular calcium level and finding that the network transitions from a regularly spiking state to a variable (asynchronous) state. And then they go and do this experiment on biological neurons and get the same thing! That is a nice win for the model: they made a prediction and validated it.

On the other hand you get statements like the following:

We then used the virtual slice to explore the behavior of the microcircuitry for a wide range of tonic depolarization and Ca2+ levels. We found a spectrum of network states ranging from one extreme, where neuronal activity was largely synchronous, to another, where it was largely asynchronous. The spectrum was observed in virtual slices, constructed from all 35 individual instantiations of the microcircuit and all seven instantiations of the average microcircuit.

In other words, it sounds like they might be able to find everything in their model.

But on the other hand…! They fix their virtual networks and ask: do we see the specific changes in our network that experiments have seen in the past? And yes, generally they do. Are we allowed to wonder how many of the experiments and predictions they tried did not pan out? It would have been great to see a full-blown failure, to understand where the model still needs to be improved.

I don’t want to understate the sheer amount of work that was done here, or the wonderful collection of data that they now have available. The models that they make will be (already are?) available for anyone to download and this is going to be an invaluable resource. This is a major paper, and rightly so.

On the other hand – what did I learn from this paper? I’m not sure. The network wasn’t really doing anything, it just kind of…spiked. It wasn’t processing structured information like an animal’s brain would, it was just kind of sitting there, occasionally having an epileptic fit (note that at one point they do simulate thalamic input into the model, which I found to be quite interesting).

This project has metamorphosed into a bit of a social conundrum for the field. Clearly, people are fascinated – I had three different people send me this paper prior to its publication, and a lot of others were quite excited and wanted access to it right away. And the broader Blue Brain Project has had a somewhat unhappy political history. A lot of people – like me! – are strong believers in computation and modeling, and would really like to see it succeed. Yet the chunk of neuroscience that they have bitten off, and the way they have gone about it, lead many to worry. The field has been holding its breath a bit to see what Blue Brain was going to release – and I think it will need to hold its breath a bit longer.

Reference

Markram, H., et al. (2015). Reconstruction and simulation of neocortical microcircuitry. Cell, 163(2), 456–492.


The silent majority (of neurons)

Kelly Clancy has yet another fantastic article explaining a key idea in theoretical neuroscience (here is another):

Today we know that a large population of cortical neurons are “silent.” They spike surprisingly rarely, and some do not spike at all. Since researchers can only take very limited recordings from inside human brains (for example, from patients in preparation for brain surgery), they have estimated activity rates based on the brain’s glucose consumption. The human brain, which accounts for less than 2 percent of the body’s mass, uses 20 percent of its calorie budget, or three bananas worth of energy a day. That’s remarkably low, given that spikes require a lot of energy. Considering the energetic cost of a single spike and the number of neurons in the brain, the average neuron must spike less than once per second. Yet the cells typically recorded in human patients fire tens to hundreds of times per second, indicating a small minority of neurons eats up the bulk of energy allocated to the brain.

There are two extremes of neural coding: Perceptions might be represented through the activity of ensembles of neurons, or they might be encoded by single neurons. The first strategy, called the dense code, would result in a huge storage capacity: Given N neurons in the brain, it could encode 2^N items—an astronomical figure far greater than the number of atoms in the universe, and more than one could experience in many lifetimes. But it would also require high activity rates and a prohibitive energy budget, because many neurons would need to be active at the same time. The second strategy—called the grandmother code because it implies the existence of a cell that only spikes for your grandmother—is much simpler. Every object in experience would be represented by a neuron in the same way each key on a keyboard represents a single letter. This scheme is spike-efficient because, since the vast majority of known objects are not involved in a given thought or experience, most neurons would be dormant most of the time. But the brain would only be able to represent as many concepts as it had neurons.

Theoretical neuroscientists struck on a beautiful compromise between these ideas in the late ’90s. In this strategy, dubbed the sparse code, perceptions are encoded by the activity of several neurons at once, as with the dense code. But the sparse code puts a limit on how many neurons can be involved in coding a particular stimulus, similar to the grandmother code. It combines a large storage capacity with low activity levels and a conservative energy budget.
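
To put rough numbers on the tradeoff the quote describes, here is a back-of-envelope comparison for a toy ‘brain’ of N binary neurons (all numbers purely illustrative):

```python
from math import comb

N = 100                              # toy brain of 100 binary neurons
dense = 2 ** N                       # any subset may be active
grandmother = N                      # one dedicated neuron per concept
k = 5                                # sparse: at most k active at once
sparse = sum(comb(N, j) for j in range(1, k + 1))

print(f"dense:        {dense:.2e} patterns, but high activity/energy")
print(f"grandmother:  {grandmother} patterns, one spike each")
print(f"sparse k<={k}:  {sparse:.2e} patterns at low activity")
```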


She goes on to discuss the sparse coding work of Bruno Olshausen, specifically this famous paper. This should always be read in the context of Bell & Sejnowski, which shows the same thing with ICA. Why are the independent components and the sparse coding result the same? Bruno Olshausen has a manuscript explaining why this is the case, but the general reason is that both are just Hebbian learning! (A toy illustration of that point follows.)
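
Here is a tiny sketch of the ‘it’s all Hebbian learning’ point: Oja’s rule (a Hebbian update plus normalization) run on correlated inputs converges to the leading principal direction; sparse coding and ICA layer a sparsity or independence pressure on top of the same correlational learning. The data and learning rate below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
# two correlated inputs: a shared signal plus small private noise
s = rng.normal(size=10000)
X = np.stack([s + 0.3 * rng.normal(size=10000),
              s + 0.3 * rng.normal(size=10000)], axis=1)

w, eta = rng.normal(size=2), 0.001
for x in X:
    y = w @ x                       # postsynaptic response
    w += eta * y * (x - y * w)      # Hebbian term minus decay (Oja)

print("learned direction:", w / np.linalg.norm(w))
# converges (up to sign) to ~[0.71, 0.71], the principal axis
```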

She ends by asking: why are some neurons sparse and some so active? Perhaps these are two separate coding strategies? But they need not be: keeping the code sparse overall could require a few specific neurons to be highly active.

Is the idea that neurons perform ‘computations’ in any way meaningful?

I wrote this up two months ago and then forgot to post it. Since then, two different arguments about ‘computation’ have flared up on Twitter. For instance:

I figured that meant I should finally post it to help clarify some things. I will have more comments on the general question tomorrow.

Note that I am pasting twitter conversations into wordpress and hoping that they convert appropriately. If you read this via an RSS reader, it might be better to see the original page.

The word ‘computation’, when used to refer to neurons, has started to bother me. It often seems to be thrown out as a meaningless buzzword, as if using the word computation makes scientific research seem more technical and more interesting. Computation is interesting and important but most of the time it is used to mean ‘neurons do stuff’.

In The Future of the Brain (review here), Gary Marcus gives a nice encapsulation of what I am talking about:

“In my view progress has been stymied in part by an oft-repeated canard — that the brain is not “a computer” — and in part by too slavish a devotion to the one really good idea that neuroscience has had thus far, which is the idea that cascades of simple low level “feature detectors” tuned to perceptual properties, like difference in luminance and orientation, percolate upward in a hierarchy, toward progressively more abstract elements such as lines, letters, and faces.”

Which pretty much sums up how I feel: either brains aren’t computers, or they are computing stuff but let’s not really think about what we mean by computation too deeply, shall we?

So I asked about all this on Twitter, then went to my Thanksgiving meal, forgot about it, and came back to a flood of discussion that I haven’t been able to digest until now:

(I will apologize to the participants for butchering this and reordering some things slightly for clarity. I hope I did not misrepresent anyone’s opinion.)

The question

Let’s first remember that the very term ‘computation’ is almost uselessly broad.

Neurons do compute stuff, but we don’t actually think about them like we do computers

Just because it ‘computes’, does that tell us anything worthwhile?

The idea helps distinguish them from properties of other cells

Perhaps we just mean a way of thinking about the problem

There are, after all, good examples in the literature of computation

We need to remember that there are plenty of journals that cover this: Neural Computation, Biological Cybernetics and PLoS Computational Biology.

I have always had a soft spot for this paper (how do we explain what computations a neuron is performing in the standard framework used in neuroscience?).

What do we mean when we say it?

Let’s be rigorous here: what should we mean?

A billiard ball can compute. A series of billiard balls can compute even better. But does “intent” matter?

Computation=information transformation

Alright, let’s be pragmatic here.

BAM!

Michael Hendricks hands me my next clickbait post on a silver platter.

Coming to a twitter/RSS feed near you in January 2015…


The bigger problem with throwing the word ‘computation’ around like margaritas at happy hour is it adds weight to

Cordelia Fine and Feminism in neuroscience

When I first started my PhD in neuroscience, a philosophically-inclined friend of mine started expounding on Feminist critiques of science. To most people, this would seem irrelevant to the science I was investigating: theory and modeling on a computer, before moving to hermaphroditic C. elegans. No females or males were being studied here! But the ideas are both insightful and important:

Emily Martin examines the metaphors used in science to support her claim that science reinforces socially constructed ideas about gender rather than objective views of nature. In her study about the fertilization process, for example, she asserts that classic metaphors of the strong dominant sperm racing to an idle egg are products of gendered stereotyping rather than portraying an objective truth about human fertilization. The notion that women are passive and men are active are socially constructed attributes of gender which, according to Martin, scientists have projected onto the events of fertilization and so obscuring the fact that eggs do play an active role.

Martin describes working with a team of sperm researchers at Johns Hopkins to illustrate how language in reproductive science adheres to social constructs of gender despite scientific evidence to the contrary: “even after having revealed…the egg to be a chemically active sperm catcher, even after discussing the egg’s role in tethering the sperm, the research team continued for another three years to describe the sperm’s role as actively penetrating the egg.”

Concepts are linked in our minds, consciously or not; the metaphors that we use matter (think of a Hopfield network). It would behoove all scientists to think deeply about Feminist critiques and their broader implications. The above example is canonical for a reason: a system of two interacting agents (sperm, egg) with one decision-maker (the sperm) behaves very differently from one with two decision-makers (sperm and egg). But preconceived gender notions prevented us from noticing this simple fact!

Cordelia Fine is the most prominent scientist articulating these views in neuroscience today. This month, she has had two good interviews. If you take one big point away, it is that males and females may have different population means (though this interacts with social circumstances), but there is substantial overlap between the populations. Humans, however, like to see things in binary opposition, so we either simply don’t recognize the amount of overlap that exists or blow small differences out of proportion.

One is with Uta Frith:

Cordelia: Happily, the perspectives are definitely not that polarized. One thing that’s worth stressing though is that criticisms of this area of research don’t stem from a belief that it’s intrinsically problematic to look at the effects of biological sex on the brain. But implicit assumptions about female/male differences in brain and behavior do influence research design and interpretation. They do this in ways that can give rise to misleading conclusions that additionally reinforce harmful gender stereotypes….

Cordelia: Yes, and long before the buzz about neuroplasticity, feminist neurobiologists were writing about this ‘entanglement’: the fact that the social phenomenon of gender (which systematically affects an individual’s psychological, physical, social and material experiences) is literally incorporated, shaping the brain and endocrine system. One of the recommendations of our article is for researchers to attempt to incorporate the principle of entanglement into their research models, including more and/or different categories of independent variables that include ways of capturing the role of the environment.

And with the Neurocritic at the PLoS Neuroscience community:

With regards to sample size, different implicit models of sex/gender and the brain will give rise to different intuitions or assumptions about what is an adequate sample size. According to implicit essentialist assumptions, there are distinctively different ‘male’ and ‘female’ brains. But non-human animal research has shown that biological sex interacts in complex ways with many different factors (hormones, stress, maternal care, and so on) to influence brain development. Because of the complexity and idiosyncrasy of these sex influences, this doesn’t give rise to distinctive female and male brains, but instead, heterogeneous mosaics of ‘female’ and ‘male’ (statistically defined) characteristics…

As for publication bias for positive findings, this has long been argued to be particularly acute when it comes to sex differences. It’s ubiquitous for the sex of participants to be collected and available, and the sexes may be routinely compared with only positive findings reported. As Anelis Kaiser and her colleagues have pointed out, this emphasis on differences over similarities is also institutionalized in databases that only allow searches for sex/gender differences.