Monday Open Question: what do you need to do to get a neuroscience job? (Updated)

A while back I asked for help obtaining information on people who had gotten a faculty job in 2016 – 2017. And it worked! With a lot of help, I managed to piece together a list of more than 70 people who got faculty jobs during this last year! I am sure it is incomplete (I keep getting new tips as of ten minutes ago…) but it is time to discuss some of the interesting features of the data.

First, the gender ratio: there are 44 men to 33 women on the list (57% men). Over at the neurorumblr, 62% of the people on the Postdoc List were men, which is roughly the same proportion.

To get more data, I focused on faculty hires who had a Google Scholar profile – it made it much easier to scrape data. It was suggested that people in National Academy of Sciences (NAS) or HHMI labs may have a better chance of getting a faculty job. Out of the 51 people with a Google Scholar profile, 4 were in both NAS and HHMI labs, 8 were in HHMI-only labs, and 4 were in NAS-only labs. Only one person who was in an HHMI/NAS lab in grad school went to a non-HHMI/NAS lab. People also suggested that a prestigious fellowship (HHWF, Damon Runyon, Life Sciences, etc.) would help. It is hard to tell, but there did not seem to be a huge number of these fellows among those gaining a job last year.

The model organisms they use are:

(15) Humans

(13) Mouse

(6) Rat

(4) Monkey

(3) Drosophila

(3) Pure computational

+ assorted others

Where are they all from? Here is the distribution of institutions the postdocs came from (update: though see the bottom of the post for more information):


In case you hadn’t noticed, this is a pretty geographically-concentrated pool of institutions. Just adding up the schools in the NYC+ area (NYC itself, plus Yale and Princeton), the Bay Area, Greater DC (Hopkins + Janelia), and ‘those Boston schools’ accounts for most of the pool. I’m not sure this accurately represents the geographic distribution of neuroscientists.

What about their publications? They had a mean H-index of 11.98 (standard deviation ~ 4.21).
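For reference, the h-index is the largest h such that an author has h papers with at least h citations each. A minimal sketch of computing it (the citation counts below are hypothetical, not from the actual dataset):

```python
def h_index(citations):
    """Return the largest h such that h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # at least `rank` papers have >= `rank` citations
        else:
            break
    return h

# Hypothetical citation counts for one candidate's papers:
print(h_index([50, 30, 22, 15, 9, 8, 8, 4, 3, 1]))  # -> 7
```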

We always hear that “you need a Cell/Nature/Science paper in order to get a job”. 29.4% (15/51) of this pool have a first- or second-author CNS paper. 68% (35/51) have a first- or second-author Nature Neuroscience/Neuron/Nature Methods paper. 78% (40/51) have some combination of these papers. It’s possible that faculty hires have CNS papers in the pipeline, but unless every single issue of CNS is dedicated to people who just got a faculty job this probably isn’t the big deal it’s always made out to be.

There’s a broader theory that I’ve heard from several people (outlined here) that the underlying requirement is really the cumulative impact factor. I have used the metric described in the link, where the approximate impact factor is taken from first-author publications and second-author publications are discounted 75% (reviews are ignored). Here are the CIFs for all 51 candidates over the past 7 years (red is the mean):
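The cumulative-impact-factor metric described above can be sketched roughly as follows (the journal names, impact-factor values, and paper list here are all illustrative, not real data):

```python
# Illustrative approximate impact factors, for the sketch only
IMPACT_FACTOR = {"Nature": 40.0, "Neuron": 14.0, "J Neurosci": 6.0}

def cumulative_if(papers):
    """papers: iterable of (journal, author_position, is_review) tuples.
    First-author papers count fully, second-author papers are discounted
    75%, and reviews (plus any later author positions) are ignored."""
    total = 0.0
    for journal, position, is_review in papers:
        if is_review:
            continue
        impact = IMPACT_FACTOR.get(journal, 0.0)
        if position == 1:
            total += impact
        elif position == 2:
            total += 0.25 * impact
    return total

papers = [("Nature", 1, False), ("Neuron", 2, False), ("J Neurosci", 1, True)]
print(cumulative_if(papers))  # 40.0 + 0.25 * 14.0 = 43.5
```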

I thought there might be a difference by model organism, but within imaginary error bars it looks roughly the same:

In terms of absolute IF of the publications, there is a clear bump in the two years prior to the candidate getting their job (though note all of the peaks in individual traces prior to that):

So far as I can tell, there is no strong signal from the publications you had as a grad student. Basically, your graduate work and lab don’t matter, except as a conduit to a postdoc position.

To sum up: you don’t need a CNS paper, though a Nature Neuroscience/Neuron/Nature Methods paper or two is going to help you quite a bit. Publish it in the year or two before you go on the job market.

Oh, and live in New York+ or the Bay Area.


Update: the previous city/institution analysis was done on a subset of individuals that had Google Scholar profiles. When I used all of the data, I got this list of institutions/cities:

Updated x2:

I thought it might be interesting to see which journals people commonly co-publish in. It turns out that, eh, it kind of is interesting and it kind of isn’t. For all authors, here are the journals that they have jointly published in (where links represent the fact that someone has published in both journals):

And here are the journals they have published in as first authors:
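For what it’s worth, a graph like these could be built with a simple co-occurrence count – a sketch, using made-up author data:

```python
from collections import Counter
from itertools import combinations

def journal_graph(author_journals):
    """author_journals: dict mapping author -> set of journals.
    Returns a Counter of (journal, journal) edges; each edge counts the
    number of authors who have published in both journals."""
    edges = Counter()
    for journals in author_journals.values():
        for pair in combinations(sorted(journals), 2):
            edges[pair] += 1
    return edges

# Made-up publication records:
authors = {
    "A": {"Neuron", "Nature Neuroscience"},
    "B": {"Neuron", "eLife"},
}
print(journal_graph(authors))
```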

How do you keep track of all your projects?

One of the central tasks that we must perform as scientists – especially as we progress in our careers – is project management. To that end, I’ll admit that I find myself a bit overwhelmed with my projects lately. I have many different things I’m working on with many different people, and every week I seem to lose track of one or another. So I’m looking for a better method! It seems to me that the optimal method to keep track of projects would have the following characteristics:

  1. Ping me every week about any project that I have not touched
  2. Re-assess each project every week, both in terms of what I need to do and the priority of the project as a whole
  3. Split the projects into subtypes: data gathering, analysis, tool building, writing, etc.
  4. Make my weekly/monthly/longer-term goals clear, and review them every week
  5. Apply some kind of social pressure to keep me on task

Right now I use a combination of Wunderlist, Evernote, Google Calendar and Brain Focus (which tracks how much time I spend on each task with a timer)… but when I get busy with one particular project I become monofocused and tune out the rest. Ideally, there would be some way to ping myself that I really do need to work on other things, at least a little. And it is too easy to adapt to whatever pinging mechanism the App Of The Moment is using and start ignoring it. Is it possible to get an annoying assistant/social mechanism that keeps you on task with a random strategy to prevent adaptation? IFTTT?
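As a toy illustration of the randomized-pinging idea (everything here is made up – it is not a real IFTTT integration), a reminder could fire at jittered intervals so you can’t learn to expect it, and only for projects that have gone untouched:

```python
import random

def next_ping_delay(base_hours=24.0, jitter=0.5, rng=random):
    """Return a randomized delay in hours until the next reminder.
    Jittering the interval by +/- jitter * base makes the ping arrive at
    unpredictable times, which should make it harder to tune out."""
    return rng.uniform(base_hours * (1 - jitter), base_hours * (1 + jitter))

def stale_projects(last_touched, now, max_idle_days=7):
    """Return the projects not touched within the last max_idle_days."""
    return [name for name, day in last_touched.items()
            if now - day > max_idle_days]

# Hypothetical last-touched timestamps, in days:
print(stale_projects({"analysis": 90, "writing": 99}, now=100))  # -> ['analysis']
```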

I asked about this on Twitter and everyone has a strong opinion on the right way to do this, and every opinion is different. They tend to split into:

  1. Have people bother you constantly
  2. Slack (only works with buy-in from others)
  3. Trello and Workflowy
  4. Something called GTD
  5. Put sticky notes everywhere
  6. Github
  7. Spreadsheets with extensive notes

I’m super curious whether there is a better strategy for project management. Perhaps I am not using Slack correctly? Suggestions?

Monday Open Question: Does anyone actually know what a ‘reflex’ is?

It has been a while since I have done one of these…

As I was working on a fellowship application last week, I realized that I did not know whether what I was studying would count as a ‘reflex’ or not. What was the definition? Is something not a reflex simply when we have a hard time mapping the input to the output? The canonical reflex arc gives us one definition, but a good definition should be general – applicable to both mammals and non-mammalian creatures (dragonflies, antlions, worms). I asked on Twitter and got some unsatisfying answers.

Is everything that is fewer than n synapses a reflex? What if those connections are mediated by state in some way (say, a peptide) so that some mapping from sensation to action depends on hunger, on mood, on whatever else? Are the only actions that are not reflexes those that are not dictated by sensory input…somehow? Is this just a lazy way to beat up on invertebrate neuroscientists?

Monday Open Question: what are the current controversies in neuroscience?

Shamelessly stolen from Marina Picciotto, who asked on Twitter: what are the current controversies in neuroscience? The easy and eternal answer seems to be: are we doing fMRI properly?

Here are some other suggestions:

aka, do mirror neurons exist and do they do what we think? Is the DSM useful? Are there real (biological) sex differences and should they be studied? Should we be using Bayesian statistics? What does sleep actually do? What is the role of parvalbumin and somatostatin-positive interneurons? What is the role of hippocampus?

How many smells can a smelly person smell?

Who cares about invertebrates if they don’t even have a cortex?

Who cares about cerebellum if it isn’t even cortex? Also, does cerebellar LTD mediate motor learning (TIL this is a controversy; I’m paying attention to the wrong cortex).

What does the ACC do? (This is important for cognition. Probably.)

What does LIP do? Is it involved in decision-making?

People can survive with basically no cortex and appear fine. So what does cortex actually do?

Is the brain Bayesian? Should we care about the Bayesian Brain hypothesis?

Is the brain actually noisy or is that all signal?

How should we mathematically model the brain, and behavior?

Should we use animal models of whole disorders or just specific symptoms?

Does PKMzeta actually have a role in memory (does it even have a real role in LTP)?

The Journal of Invited Dissent

“Why aren’t there comments on academic articles?” someone asked me over coffee (yes, I have exciting coffee conversations). “People should point out how silly a lot of this stuff is.”

I shrugged. “Politics,” I said. “Look at the head of any lab: they’ll rip apart a paper in their lab meetings, and then won’t say much in public. They need to keep a congenial public face because those other scientists will be reviewing their papers.”

The truth is that there are comment sections on a lot of scientific articles; they are just barely used, or used poorly (random rants, irrelevant commentary, etc.).

My companion suggested that what we really need is a journal offering critical commentary on other articles: and not just the bad, but the good as well. What does this really say? What is interesting or uninteresting?

This is the Journal of Invited Dissent. Would it work? Probably not: there is too much incentive to keep the veneer of bland congeniality in public. But there is an example of what it might look like (it was not what spurred the conversation above, but it is telling that the problem repeatedly pops up).

Bjorn Brembs has taken exception to an article published in Nature Neuroscience last year. He found the article to be overhyped and under-referenced (though still interesting and useful!). Although he wrote a letter to the editor at NN, they basically shrugged with comments such as “I agree that the article’s tone is a little more breathless than strictly required, but this is the style presently in vogue”.

So he posted the letter to the comments section at PubMed! Something you probably did not even know existed, and are likely to ignore even if you do know of it. And even better, the senior author on the paper publicly responded in the comments!

And these comments illustrate exactly why they are needed: they provide much-needed context outside of the ‘hype’ needed to publish in a high-profile journal. They shine light on the scientific crevices that those few of you who are not experts in motor learning might otherwise pass by.

Monday Open Question: The unsolved problems of neuroscience?

Over at NeuroSkeptic, there was a post asking “what are the unsolved problems of neuroscience”? For those interested in this type of question, there are more such questions here and here. This, obviously, is catnip to me.

Modeled on Hilbert’s famous 23 problems in mathematics, the list comes from Ralph Adolphs and has questions such as “how do circuits of neurons compute?” and “how could we cure psychiatric and neurological diseases?” I found the meta-questions the most interesting:

Meta-question 1: What counts as understanding the brain?

Meta-question 2: How can a brain be built?

Meta-question 3: What are the different ways of understanding the brain?

But the difference between the lists from Hilbert and Adolphs is very important: Hilbert asked precise questions. The Adolphs questions often verge on extreme ambiguity.

Mathematics has an advantage over biology in its precision. We (often) know what we don’t know. Is neuroscience even at that point? Or would it be more fruitful to propose a systematic research plan?

Me, I would aim my specific questions at something more basic and precise than most of those on the list. For the sake of argument, here are a couple possible questions:

  • Does the brain compute Bayesian probabilities, and if so how? (Pouget says yes, Marcus says no?)
  • How many equations are needed to model any given process in the nervous system?
  • How many distinct forms of long-term potentiation/depression exist?

So open question time:

What (specific) open question do you think is most important?

Or: what are some particularly fruitful research programs? (I am thinking of something like the Langlands program here.)

Open Question: Do we need a new Cosyne?

UCSD started one of the first (the first?) computational neuroscience departments. But when I started graduate school there, it was being folded into the general Neuroscience department; now it is just a specialization within the department. Why? Because we won. Because people who used to be computational neuroscientists are now just – neuroscientists. I could tell there was a change at UCSD when people trained in electrical engineering instead of biology didn’t even feel the need to join the specialization. What used to be a unique skill is becoming more and more common.

I have been thinking about this for the last few days after news trickled out about acceptances and rejections at Cosyne (note: I did not submit an abstract to the Cosyne main meeting.) The rejection rate this year was around 40%. Think about this for a minute: nearly half of the people who had wanted to present what they had been working on to their peers were not able to do so.

Now, people go to conferences for a wide variety of reasons. Some go to socialize, some to hear talks, some for a vacation. But the most important reason is to communicate your new research to your peers. And it’s a serious problem when half of the community just can’t do that.

Cosyne fills the very important role of bringing together the Computational and Systems fields of neuroscience (hence, CoSyNe). But when it was founded in 2004, this was not a big group of people. Perhaps the field has simply gotten too big to accommodate everyone in one medium-sized conference; either the conference must grow or people need to flee to more specialized grounds – and repeat the process of growth and rebirth.

At dinner recently, I mentioned that it may be time for some smaller conferences to split off from Cosyne. Heads nodded in agreement; it’s not just me being contrary. There are other computational conferences – CNS, NIPS, SAND, RLDM. But none of them occupies the niche of Cosyne; none of them brings together experimentalists and theorists in the same way. The closest is RLDM, which occupies a kind of intersection of Cosyne and machine learning. (edit: there is also CodeNeuro, though I don’t yet have a sense of the community there.)

We need more of that.

Monday open question: does fMRI activation have a consistent meaning?

Reports from fMRI rely, somewhat implicitly, on a rate-coding model of populations of neurons in the brain. More activity means more activation, and more activation usually means roughly the same thing. Useful, but misleading. How much should we rely on the interpretation that an area having similar activation in two different behaviors means the same thing? Neuroskeptic covers one such finding:

The authors are Choong-Wan Woo and colleagues of the University of Colorado, Boulder. Woo et al. say that, based on a new analysis of fMRI brain scanning data, they’ve found evidence inconsistent with the popular theory that the brain responds to the ‘pain’ of social rejection using the same circuitry that encodes physical pain. Rather, it seems that although the two kinds of pain do engage broadly the same areas, they do so in very different ways.

Roughly, they use a cool new statistical technique to measure activity in more oblique ways: combinations of activity carry meaning that individual regional activations do not.

The basic question here is: given that we know small regions can have multiple ‘cognitive’ meanings depending on the context of the entire network – or specifically which neurons in the region itself – are active, how much can we compare ‘activity’ signals between (or even within!) behaviors?

Obviously sometimes it will be entirely fine. Other times it won’t. Is there an obvious line?

Monday Open Question: What do neuroscientists know that the average person doesn’t?

Science helps us explain the world, often in ways that we wouldn’t necessarily be able to from every day experience. What is it that we know about ourselves and our nervous system that most people don’t realize?

Specifically, what aspect of behavior do neuroscientists take for granted that the average person doesn’t know?

My example: All perception is inference (not necessarily Bayesian).

I was reading a piece of philosophy by Bertrand Russell from ~100 years ago in which he distinguished between sensory facts that were inferred and sensory facts that were directly sensed. I would propose that it is now noncontroversial (???) that all perception is some form of inference.

What else?

Monday open question: How long will it take to “solve” the brain?

Let’s do a quick calculation…

At the largest neuroscience conference, SfN, there are maybe 30,000 scientists who show up. Let’s pretend that this is about 1/3 of all neuroscientists (probably an underestimate) so we get 100,000 of us suckers.

Now let’s pretend we could assign each one of them a neuron that we wanted them to study. And let’s pretend that we were going to try to understand mice because, well, why not. There are ~71,000,000 neurons in the mouse brain according to Wikipedia.

This means that the mouse has about 700 neurons per neuroscientist.

There are ~10^11 synapses in the mouse brain, or about 1400 per neuron. That means there are roughly 980,000 synapses per neuroscientist.
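The back-of-envelope arithmetic above, spelled out (using the rounder figure of 100,000 neuroscientists from earlier):

```python
# Back-of-envelope: how much mouse brain per neuroscientist?
neuroscientists = 100_000        # ~3x SfN attendance, rounded up
mouse_neurons = 71_000_000       # Wikipedia's estimate
mouse_synapses = 1e11

neurons_each = mouse_neurons / neuroscientists        # ~710 ("about 700")
synapses_per_neuron = mouse_synapses / mouse_neurons  # ~1400
synapses_each = neurons_each * synapses_per_neuron    # ~1,000,000

print(round(neurons_each), round(synapses_per_neuron), round(synapses_each))
```

(The 980,000 figure in the text comes from rounding to 700 × 1400 first; without the rounding it works out to a clean million per person.)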

Additionally, inside of each neuron is a whole bunch of molecular machinery that we don’t understand. Here is a simplified schematic of one of these pathways (dopamine):

[Figure: simplified schematic of the dopamine signaling pathway]

I have no idea how many of these pathways there are, nor how they interact. They’re kind of complicated.

Now let’s go up a step and remember that you can’t study a neuron in isolation, because you have no idea what its inputs are or what it is outputting to. So now we need people to investigate sets of networks. And how those networks interact with each other. And how that interaction affects the physical world. And so on.

And all this is just for a mouse.

Whenever you hear, “but we’ve been studying [Alzheimers/Parkinsons/anything else] for thirty years!” remember what we’re dealing with.

The only way we can understand the mammalian brain without precisely measuring every single step of this is to find regularities and make theoretical models that can generalize from what we know to make predictions about other parts of the system. Otherwise, “understanding the brain” at all in our lifetimes is a hopeless goal.