NeuroRumblr, 2018 – 2019

Quick announcement –

I’ve refreshed the NeuroRumblr for the 2018 – 2019 job season. If you are a postdoc looking for an academic job, add yourself to The List so that search committees can reach out to you. Note that I refresh the list yearly, so if you have added yourself in the past you should fill out the form again for the new season. If you are on a faculty search committee, please feel free to email me to gain access to The List (both this year’s and last year’s). Every year, I have gotten requests for access from every kind of institution across the world. As an aside, if you have been on a search committee that has used the list in the past and have ideas on how to make it more useful, or just have other thoughts, I’d be curious to hear them.

There is a page for labs that are looking for postdocs.

There is a page for labs that are looking for research staff.

There is a page to keep track of neuroscience conferences.

There is a page with collections of advice on being an academic and looking for academic jobs.

There is a twitter account (@neurorumblr) that I occasionally use to make announcements. The account will now automatically tweet, multiple times a day, about jobs that were put on the rumblr the previous day, as well as about upcoming job and conference deadlines. If you are a PI who placed an ad under postdocs or research staff, you can now add your twitter handle and the account will tag you when it tweets. If you tweet and tag @neurorumblr, I will usually retweet it – more free advertising! The twitter account gets a lot of attention, and I keep hearing from people who have looked for jobs that they paid close attention to it.

Another reminder that I am looking to identify neuroscientists hired as tenure-track faculty over the previous year. I already have a lot of people on the list! But I know that’s not everyone.

Happy hunting.


Thoughts on freedom of will

Some rough notes from a personal attempt to clarify my own thinking. Consider this a work-in-progress and probably wrong.

Principles underlying feeling of freedom of will (note that this is different from freedom of action or actual agency):

  1. Bayesian updating (distribution of decisions)
  2. State/neural dynamics are unpredictable in the future
  3. Accessibility of internal state
  4. Web of concepts
  5. Feedforward-ish network

What is it that makes someone believe themselves to be free? That they can exercise willpower in pursuit of moral agency?

There are a few concepts from neuroscience that we need in order to understand the feeling of freedom in a deterministic world.

The first concept we have to understand is that the brain is a neural network. It has neurons connected to other neurons connected to other neurons. We like to imagine brains as step-by-step programs that slowly process information and allow an organism to make a decision, but in reality they are vastly interconnected. Still, there is some validity to levels of feedforward-ness! The important thing is that not every neuron has access to the information from every other neuron – and not every ‘layer’, or collection of neurons, has access to information from every other ‘layer’.

The second concept is the fundamental unpredictability of the brain’s own future state. There are informational constraints on knowledge of what the internal state of a system will be, both in the sense of the current neural dynamics and in the broader sense of states like ‘hunger’, ‘thirst’, and ‘sleepiness’. Each of these modulates the activity of different layers of the brain in a variety of ways, often through neuromodulatory pathways. Further, as a corollary of this and the first concept, different layers have imperfect access to the state of other neurons and other layers.

The third concept is the implementation of ideas and memories as a web of concepts. Smelling an odor can transport you back to the first time you smelled it, with all the associated feelings. No idea exists in isolation but accessing one will inevitably access a variety of others (to a greater or lesser extent). This is foundational to how memories are stored.

The fourth and final concept is the Bayesian Brain, the idea that brains represent probability distributions and compute in terms of inference. No 19th century symbolic thinking for us – we process information in a manner that fundamentally requires us to think in terms of distributions of possible options or possible futures.

How does this give rise to a feeling of freedom of will? Suppose you were considering whether or not to perform some action. When weighing the various possibilities, there is some set of neurons doing the weighing. These neurons do not have access to all of the neurons involved in the decision. In other words, these neurons do not know which action will ultimately be taken. Instead they must operate on a set of probabilities over the future state of the network. The operation on this distribution is the feeling of free will – there is an internal state, or internal dynamics, that is inaccessible to the deliberation. These neurons get input from those dynamics and can plausibly provide output to them (the feeling of willpower).
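As a toy illustration of the idea that deliberation operates on a distribution (every number and name here is made up for illustration, not a model of any real circuit):

```python
import numpy as np

# Toy illustration: a deliberating circuit cannot read the hidden internal
# drive directly, only a noisy readout of it, so the best it can do is
# maintain a posterior distribution over its own future action.
rng = np.random.default_rng(0)

hidden_drive = 0.7                           # true internal state, inaccessible
readout = hidden_drive + rng.normal(0, 0.5)  # noisy internal observation

prior = np.array([0.5, 0.5])                 # [act, don't act]
means = np.array([1.0, 0.0])                 # expected readout under each option
likelihood = np.exp(-0.5 * ((readout - means) / 0.5) ** 2)
posterior = prior * likelihood
posterior /= posterior.sum()
print(posterior)                             # a distribution, not a fixed answer
```

From the circuit’s vantage point, both outcomes genuinely carry nonzero probability – which is the sense of “freedom” being described here.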

Imagining the future is similar. Will I have the possibility of choosing freely among many options? Yes, of course, because I am unclear what my future state will be. The probability distribution conditional on external and internal state is not concentrated on a single outcome (though it can be, given sufficiently powerful external stimuli).

Could I choose to eat bacon and eggs this morning instead of my daily bowl of granola? Possibly; it exists in the space of possible options. Will I do so? I can sit there, sample my internal state, and still have some uncertainty about whether I desire it or not. That makes me feel free: I will choose to do so or not, unconstrained by outside forces, even though the outcome is entirely deterministic.

Fundamentally, the feeling of freedom is a reflection of probabilistic and uncertain thinking.

Please help me identify neuroscientists hired as tenure-track assistant profs in the 2017-18 faculty job season

Last year, I tried to crowd-source a complete list of everyone who got hired into a neuroscience faculty job over the previous year. I think the list has almost everyone who was hired in the US… let’s see if we can do better this year?

I posted an analysis of some of the results here – one of the key “surprises” was that no, you don’t actually need a Cell/Nature/Science paper to get a faculty job.

If you know who was hired to fill one or more of the listed N. American assistant professor positions in neuroscience or an allied field, please email me with this information.

To quote the requirements (stolen from Dynamic Ecology):

I only want information that’s been made publicly available, for instance via an official announcement on a departmental website, or by someone tweeting something like “I’ve accepted a TT job at Some College, I start Aug. 1!” If you want to pass on the information that you yourself have been hired into a faculty position, that’s fine too. All you’re doing is saving me from googling publicly-available information myself to figure out who was hired for which positions. Please do not contact me to pass on confidential information, in particular confidential information about hiring that has not yet been totally finalized.

Please do not contact me with nth-hand “information” you heard through the grapevine. Not even if you’re confident it’s reliable.

I’m interested in positions at all institutions of higher education, not just research universities. Even if the position is a pure teaching position with no research duties.

Learn by consuming the brains of your enemies

A few people have sent this my way and asked about it:

In a paper published Monday in the journal eNeuro, scientists at the University of California-Los Angeles reported that when they transferred molecules from the brain cells of trained snails to untrained snails, the animals behaved as if they remembered the trained snails’ experiences…

In experiments by Dr. Glanzman and colleagues, when these snails get a little electric shock, they briefly retract their frilly siphons, which they use for expelling waste. A snail that has been shocked before, however, retracts its siphon for much longer than a new snail recruit.

To understand what was happening in their snails, the researchers first extracted all the RNA from the brain cells of trained snails, and injected it into new snails. To their surprise, the new snails kept their siphons wrapped up much longer after a shock, almost as if they’d been trained.

Next, the researchers took the brain cells of trained snails and untrained snails and grew them in the lab. They bathed the untrained neurons in RNA from trained cells, then gave them a shock, and saw that they fired in the same way that trained neurons do. The memory of the trained cells appeared to have been transferred to the untrained ones.

The full paper is here.

The long and short of this is that there is a particular reflex (a memory) that changes after the snail has experienced a lot of shocks. How memory is encoded is still debated, but one strongly-supported mechanism (especially in these snails) is a change in the amount of particular proteins expressed in some neurons. These proteins might include more of one channel or receptor, making a neuron more or less likely to respond to signals from other neurons. So for instance, when a snail receives its first shock a neuron responds and the snail withdraws its gill. Over time, each shock builds up more of the proteins that make the neuron respond more and more strongly. These proteins are built up from RNA (the “blueprint” for the proteins, if you will) located in the vicinity of the part of the neuron that receives this information. There are a lot of sophisticated mechanisms that determine how and where these RNAs are made and then shipped off to the place in the neuron where they can be of the most use.

This new paper shows that, in these snails, you can just dump RNA from another animal onto these neurons, and that RNA already encodes something about the type of protein it will produce. This is not going to work in most situations (I think?) so it is surprising and cool that it does here! But hopefully you can begin to see what is happening and how the memory is transferred: the RNA is now in the cell, it is marked in a way that will lead it to produce some protein that will change how the cell responds to input, and so on.

One of the people who asked me about this asked specifically in relation to AI. Could this be used as a new method of training Deep Networks somehow? The closest analogy I can think of is if you have two networks with the same architecture that have been trained in the same way (this plays the role of evolution). Then you train a little more, maybe on new stimuli or maybe on a new task, or maybe you are doing reinforcement learning and you have a network that predicts a different action-value pair. The analogy would be to choose a few units (neurons) and directly copy their weights from the first network into the second network. Would this work? Would it be useful? I doubt it, but maybe? But see this interesting paper on knowledge distillation that was pointed out to me by John O’Malia.
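The weight-copying analogy could be sketched like this (a toy numpy sketch under made-up sizes; not a claim that this would actually transfer anything useful):

```python
import numpy as np

# Two networks with identical architecture; transplant the incoming and
# outgoing weights of a few hidden units from net A into net B.
rng = np.random.default_rng(1)

def make_net(rng):
    return {"W1": rng.normal(size=(8, 4)),   # input -> hidden (8 hidden units)
            "W2": rng.normal(size=(2, 8))}   # hidden -> output

net_a, net_b = make_net(rng), make_net(rng)

units = [0, 3, 5]  # hidden units whose "memory" we copy over
net_b["W1"][units, :] = net_a["W1"][units, :]  # incoming weights
net_b["W2"][:, units] = net_a["W2"][:, units]  # outgoing weights
```

The rest of net B’s weights are untouched, which is roughly the situation of injecting RNA into an otherwise different brain.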

Controversial topics in Neuroscience

Twitter thread here.

  • Do mice use vision much?
    • They have pretty crappy eyesight and their primary mode of exploration seems to be olfactory/whisker-based
  • How much is mouse cortex like primate cortex?
    • Mouse cortex is claimed to be more multimodal than primate cortex which is more specialized
  • “The brain does deep learning”
    • Deep learning units aren’t exactly like neurons, plus we resent the hype that they have been getting
  • Is there adult neurogenesis? Is it real or behaviorally relevant?
    • See recent paper saying it doesn’t exist in humans
  • Brain imaging
    • Does what we see correlate with neural activity? Are we able to correct for multiple comparisons correctly? Does anyone actually correct for multiple comparisons correctly?
  • Bayesian brain
    • Do people actually use their priors? Does the brain represent distributions? etc
  • Konrad Kording
    • Can neuroscientists understand a microprocessor? Is reverse correlation irrelevant?
  • Do mice have a PFC
    • It’s so small!
  • STDP: does it actually exist?
    • Not clear that neurons in the brain actually use STDP – often looks like they don’t. Same with LTP/LTD!
  • How useful are connectomes
    • About as useful as a tangled ball of yarn?
  • LTP is the storage mechanism for memories
    • Maybe it’s all stored in the extracellular space, or the neural dynamics, or something else.
  • Are purely descriptive studies okay or should we always search for mechanism
    • Who cares about things that you can see?!


  • Does dopamine have ‘a role’?
    • Should we try to claim some unified goal for dopamine, or is it just a molecule with many different downstream effects depending on the particular situation?
  • Do oscillations (‘alpha waves’, ‘gamma waves’, etc) do anything?
    • Are they just epiphenomena that are correlated with stuff? Or do they actually have a causative role?

What HASN’T Deep Learning replicated from the brain?

The brain represents the world in particular ways. Here are a few:

1. The visual world on the retina

The retina is thought to whiten images, or transform them so that they always have roughly the same average, maximum, and minimum (so that you can see in both very bright and very dark environments). This was originally shown very nicely in two papers from Atick and Redlich (1990, 1992). Essentially, you want to smooth the visual scene around each point depending on the noise. You get receptive fields that look something like this:

Or more generally this:

A denoising autoencoder – a network that tries to replicate a corrupted image which smooths locally – has neural representations that look similar:
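A minimal numerical sketch of the center-surround shape these whitening filters take (the sigmas and grid size are arbitrary choices for illustration):

```python
import numpy as np

# A center-surround (difference-of-Gaussians) filter, the classic
# retina-like whitening shape: excitatory center minus broader surround.
x = np.linspace(-5, 5, 101)
xx, yy = np.meshgrid(x, x)
r2 = xx**2 + yy**2

def gauss(r2, sigma):
    g = np.exp(-r2 / (2 * sigma**2))
    return g / g.sum()

dog = gauss(r2, 0.8) - gauss(r2, 2.0)  # center minus surround

# Band-pass: the filter removes the mean (DC) of an image patch,
# which is exactly the "same average everywhere" property above
print(abs(dog.sum()) < 1e-10)  # True
```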

2. The visual world in first order visual cortex

Similarly, if you want to efficiently represent the visual world (once it is denoised) you want to represent things sparsely or independently. This was shown by Olshausen and Field 1996 and Bell and Sejnowski 1997 and is equivalent to doing ICA on natural images. Note that doing PCA on natural images will give you Fourier components.
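The “PCA on natural images gives Fourier components” claim has a quick sanity check: translation-invariant statistics mean a circulant covariance matrix, and circulant matrices are diagonalized by the Fourier basis. A toy 1D version (the correlation falloff is made up):

```python
import numpy as np

# Stationary (translation-invariant) correlations -> circulant covariance
n = 64
dist = np.minimum(np.arange(n), n - np.arange(n))   # circular distances
row = np.exp(-dist / 5.0)                           # made-up correlation falloff
cov = np.array([[row[(j - i) % n] for j in range(n)] for i in range(n)])

# Eigenvalues of a circulant matrix are the FFT of its first row,
# i.e. the principal components are Fourier modes
eigvals = np.linalg.eigvalsh(cov)
fft_vals = np.fft.fft(row).real
print(np.allclose(np.sort(eigvals), np.sort(fft_vals)))  # True
```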

If you train a Deep Network on ImageNet (AlexNet example below), the filters on the first layer look similar:

3. The auditory world

The best representation of the auditory world is also an efficient encoding. Lewicki 2002 shows that if you run ICA on acoustic data you get filters that look nearly identical to the sounds neurons respond to (wavelet basis functions).

I have not seen a visualization of the first few layers of a neural network that classifies speech (for instance) but I would guarantee it has features that look like wavelets.

4. Spatial cells

Your sense of place in the world is encoded by a combination of grid cells – which provide a periodic representation of place – and place cells, which represent precise locations in space. Dordek et al 2016 showed that non-negative PCA on place cell activity will give you grid cells. This is similar to the result that PCA on images gives you Fourier components. Note that Dordek et al also use a single-layer feedforward neural network and show that it has a similar property.
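A toy version of the Dordek et al observation, using plain PCA rather than their non-negative variant (all parameters are arbitrary): simulate Gaussian place cells tiling a circular track, and the leading principal components come out periodic rather than localized, by the same translation-invariance logic as the PCA-gives-Fourier result.

```python
import numpy as np

# Place cells with Gaussian tuning tiling a circular track; the population
# covariance is translation-invariant, so principal components are periodic.
n_pos, n_cells = 200, 200
pos = np.arange(n_pos)
centers = np.arange(n_cells) * (n_pos / n_cells)
d = np.abs(pos[:, None] - centers[None, :])
d = np.minimum(d, n_pos - d)                 # circular distance to each center
activity = np.exp(-d**2 / (2 * 10.0**2))     # positions x cells

activity -= activity.mean(axis=0)            # remove the DC component
_, s, vt = np.linalg.svd(activity, full_matrices=False)
pc1 = vt[0]  # leading component across cells: periodic, not localized
```

Checking the power spectrum of `pc1` shows essentially all of its energy at a single spatial frequency – a 1D, Fourier-like analogue of the grid-cell periodicity.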

It turns out that if you train a Deep recurrent network on a navigation task, you get grid cells (once you have assumed place cells).


What else is left? Olfaction is a mess and doesn’t have a coherent coding principle as far as I can tell (the olfactory space is not clearly defined). Mechanosensation (touch) has been hard to define, but Zhao et al 2017 find first-order touch receptive fields with an autoencoder (as with vision). You can get CPGs (oscillatory movement generators) with recurrent neural networks by training an input signal to be associated with a particular sequence of movements. I’m struggling to think of other internal representations that are well understood.

A long-term principle in neuroscience has been that successive layers of the brain attempt to decorrelate their responses to produce ever-finer features. Tishby and Zaslavsky 2015 suggest that a similar principle applies to Deep Networks: networks are trying to find the representations that preserve the most information between input and output given the limited bandwidth that they have (numbers of layers, numbers of units). It should not be surprising that this entails something like different forms of PCA or ICA or some other signal-detection framework.

One of the nice things about Deep Networks is that you do not have to explicitly code for this in order to find these features – they are costless, in a way. You can train for a particular task – a visually-driven one, a path-driven one, an acoustic-driven one – and these features will just fall out. Not only will these features fall out, but neurons deeper in the pathway will also have similar activity. This is a much harder problem, and one for which “run PCA again” or “run ICA again” will not give a good answer.

What other neural representations have we not yet seen in neural networks?

Behold, the behaving brain!

In my opinion, THE most important shift in neuroscience over the past few years has been the focus on how behavior changes neural function across the whole brain. Even the sensory systems – supposedly passive passers-on of perfectly produced pictures of the world – will be shifted in unique ways by behavior. An animal walking will have different responses to visual stimuli than an animal that is just sitting around. Almost certainly, other behaviors will have other effects on the animal.

A pair of papers this week have made that point rather elegantly. First, Carsen Stringer and Marius Pachitariu from the Carandini/Harris labs have gobs of data from when they were recording ~10,000 neurons simultaneously. Marius Pachitariu has an excellent twitter thread explaining the work. I just want to take one particular point from this paper which is that you can explain a surprising amount of variance in the primary visual cortex – and all across the brain – simply by looking at the movement of the animal’s face.

In the figures below, they have taken movies of an animal’s face, extracted the motion energy (roughly, how much movement there is at that location in the video), and then used PCA to find the common ways that you can describe that movement. Using this kind of common motion, they then tried to predict the activity of individual neurons – while ignoring the traditional sensory or task information that you would normally be looking at.
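The shape of that analysis can be sketched in a few lines of numpy (all data here is synthetic and the real pipeline is far more sophisticated): motion energy from frame differences, PCA for common motion components, then linear regression onto a neuron.

```python
import numpy as np

# Synthetic stand-in for a face movie: time x pixels x pixels
rng = np.random.default_rng(0)
frames = rng.normal(size=(500, 30, 30))
motion = np.abs(np.diff(frames, axis=0)).reshape(499, -1)  # motion energy

motion -= motion.mean(axis=0)
u, s, vt = np.linalg.svd(motion, full_matrices=False)
pcs = u[:, :10] * s[:10]              # top 10 common motion components

# A fake neuron driven by the first motion component plus noise
neuron = 2.0 * pcs[:, 0] + rng.normal(0, 0.1, size=499)

# Predict the neuron from motion components alone
beta, *_ = np.linalg.lstsq(pcs, neuron, rcond=None)
pred = pcs @ beta
r2 = 1 - np.sum((neuron - pred) ** 2) / np.sum((neuron - neuron.mean()) ** 2)
print(round(r2, 2))  # close to 1 for this constructed neuron
```

The surprise in the real data is that this kind of regression, with no task variables at all, explains so much of cortical activity.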

The other paper is from Simon Musall and Matt Kaufman in Anne Churchland’s lab. Simon also has a nice twitter description of their work. Here, they used a technique that images the whole brain simultaneously (though I am not sure to what depth), at the cost of resolution (individual neurons are not identifiable but are averaged together). The animals are doing a task where they need to tell the difference between two tones, or two flashes of light. You can look for the brain areas involved in choice, or the areas involved in responding to vision or audio, and they are there (choice, kind of?). But if you look at where movement is being represented, it is everywhere.

The things that you would normally look for – the amount of brain activity you can explain by an animal’s decisions or its sensory responses – explain very little unique information.

This latter point is really important. If you had looked at the data and ignored the movement, you would have certainly found neurons that were correlated with decision-making. But once you take into account movement, that correlation drops away – the decisions are too correlated with general movement variables. People need to start thinking about how much of their neural data is responding to the task the animal is doing and how much is due to movement variables that are aligned to the task. This is really important! Simple averaging will not wash away this movement.

There is a lot more to both of these papers and both will be more than worth your time to dig into.

I’m not sure you would have noticed this effect in either case had they not been recording from massive numbers of neurons simultaneously. This is a brave new world of neuroscience. How do we deal with this massively complex behavioral data at the same time that we deal with massive neural populations?

In my mind, the gold standard for how to analyze this data comes from Eva Naumann and James Fitzgerald in a paper out of the Engert lab. They are analyzing data from the whole brain of the zebrafish as it fictively swims around and responds to some moving background. Rather than throwing up their hands at the complexity of this data and the number of moving pieces what they did was very precisely quantify one particular aspect of the behavior. Then they followed the circuit step by step and tried to understand how the quantified behavior was transformed in the circuit. How did the visual stimuli guide the fish’s orientation in the water? What were the different ways the retina represented that visual information? How was this transformed by the relays into the brain? How was this information further transformed in the next step? How did the motor centers generate the different types of behavior that were quantified?

The brain evolved to produce behavior. In my opinion there is no way to understand the brain – any of it – if you don’t understand the behavior that the animal is producing.

Monday Open Question: How many types of neurons are there in the brain?

How many types of neurons are in the brain? Not just number, but classes that represent some fundamental unit of computation? I tweeted an article about this a couple days ago and (justly) got pilloried for saying it counted classes in the brain rather than in two cortical regions. So what is the answer for the whole brain?

Obviously the answer depends on the brain that you are talking about. In the nematode C. elegans, we know that every hermaphrodite has 302 neurons and every male has 381. I believe these specifically male neurons get pruned in the developmental process if the animal does not become a male. These neurons tend to come in symmetric pairs or quartets, one showing up on each side of the body, so the number of neural ‘classes’ is on the order of 118 – though there is evidence that some neurons can be slightly different between their left and right side (ASEL and ASER, for example). Fruit flies (Drosophila) also show sex-specific neurons, with the genes Fruitless and Doublesex controlling whether certain neurons are masculinized or feminized. So not only are there going to be different classes of neurons in males and females, we know that there are single (or, again, symmetric) neurons that control single behaviors. On the other hand, in the fruit fly retina there are definitely distinct classes of neurons that are tiled across the eye. This should frame our thinking about the number of neural classes – there are classes with large numbers of neurons where convolution is useful (repeating the same computation across some space, such as visual or auditory or even musculature space) but perhaps neural function becomes more specific and class-less once specific outputs are needed.

The fruit fly brain may seem a bit silly, why bother comparing it to us cortical mammals? But adult Drosophila have roughly the same number of neurons as larval zebrafish, a vertebrate animal with a cerebrum that is a popular organism to study in neuroscience. So do we think that the zebrafish has just as many pre-planned neurons as Drosophila? Or is its neural structure somewhat looser, more patterned? I don’t have an answer here but I think it is worth thinking about the similarities and differences in these organisms that have similar numbers of neurons but quite different environmental and developmental needs.

Let’s turn to mammals. The area with the most well-defined number of cell classes is probably the retina. I’m not sure of the up-to-date estimates for number of cell classes but the classic description has two classes (rods/cones) in the input layer of the retina which can be further split depending on the number of colors an animal can see – for instance, humans have S, M, and L cones roughly corresponding to blue, green, and red light. This review roughly estimates that further into the eye there are two types of horizontal cells (first layer), ~12 types of bipolar cells (second layer), ~30 types of amacrine cells (third layer). From other sources we think there is something on the order of ~30 types of retinal ganglion cells, the output from the eye into the brain. Interestingly, this is roughly the same number of defined classes that we think the fruit fly has! But again, there may be species specificity here; something on the order of 95%+ of the output layer of monkey retina is a single cell class. So the eye alone has at least 80 classes of neurons and quite probably more.

The olfactory bulb is probably more complex. In mice, at least, the number of olfactory glomeruli is probably on the order of one or two thousand? Though I would expect that once past this layer the classification will look more like retina or cortex – on the order of tens of subtypes.


Now let’s think about the cortex. The paper that inspired this post tried to estimate the number of cell classes by using single-cell RNA-sequencing in mice to identify the transcripts present in different cells and then clustering them into distinct sub-classes. It should be clear up front that the clusters you identify (1) may not be categorical but could be continuous between types of neurons and (2) may differ if you cluster with a different method or with different types of data – functional responses, for instance. The authors of this paper make clear that they certainly find cells that look ‘intermediate’ between their clusters, so whatever categories we get may not be very firm. For instance, in the following figure the size of the circles represents the number of cells they identify in a particular cluster and the thickness of the line between two circles is how many cells are intermediate between those clusters.


Without getting into too many details, they find that two distinct anatomical regions share roughly 50 inhibitory neuron types with common transcript profiles, suggesting that the types of inhibition may be a common, repeated pattern across the brain. However, the ~50 excitatory neuron types were essentially unique to each of the two regions.

Chuck Stevens has an interesting paper where he attempts to find lower and upper bounds on the number of possible cell classes in cortex. Let’s say that we accept the tiling principle, that the same types of cells are repeated again and again in a motif:

This argument can be extended to the neocortex. Underneath 1 mm^2 of most regions of the primate cortical surface are about 10^5 neurons — the striate cortex is an exception with twice the number — each of which covers say 0.05 mm^2 with its dendritic arbor (assumed to be 0.25 mm in diameter). Twenty neurons with dendritic arbors of this size would be required to cover a square millimetre of cortex, so the upper limit on number of cell types, if each must tile the cortex, is 10^5/20 = 5000, or an average of 1000 per layer. Now assume that the cortex has 10 times more neurons of each type than required to cover the cortex, a redundancy factor of 10 as guessed above for hippocampus: we thus would have about 100 neuron types per layer. If we believe there are a dozen ganglion cell types, two dozen amacrine cell types, and four dozen different kinds of inhibitory neurons in the CA1 region of hippocampus, 100 cell types per layer of neocortex seems like a reasonable number – not good news for the micromodelers.

Let’s update this estimate; we think that there may be 25 excitatory cell types per region. I don’t actually know off-hand the percentage of mouse cortex that these two regions encompass (a motor region, ALM, and a visual region, VISp) but let’s say they are roughly 10% of the cortical area each (this could be grossly wrong so feel free to correct me). We then might believe that cortex has on the order of 25 * 10 ~ 250 excitatory cell classes and ~50 inhibitory cell classes. Does this feel right? 300 classes for all of cortex?
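For what it’s worth, the arithmetic in the quote and in the update checks out (a back-of-envelope script using the same assumed numbers, including my possibly-grossly-wrong 10%-per-region guess):

```python
# Stevens-style upper bound on cortical cell types
neurons_per_mm2 = 1e5                # neurons under 1 mm^2 of primate cortex
arbor_mm2 = 0.05                     # area covered by one dendritic arbor
tilers = 1 / arbor_mm2               # 20 neurons needed to tile 1 mm^2
upper = neurons_per_mm2 / tilers     # 5000: upper bound on cell types
per_layer = upper / 5 / 10           # ~5 layers, redundancy 10 -> ~100/layer

# Updated estimate from the RNA-seq paper: ~25 excitatory types per region,
# regions assumed (possibly grossly wrong) to be ~10% of cortex each
excitatory = 25 * 10
total = excitatory + 50              # plus ~50 shared inhibitory types
print(upper, per_layer, total)       # 5000.0 100.0 300
```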

But the cortex contains a minority of the cells in the brain – the majority are in a single structure, the cerebellum. I don’t know of an estimate of the number of neural classes there, but a structure known for its beautiful tiling neurons seems likely to have a fair bit of structure in its cell classes. What would we estimate here? Something similar to a primary sensory area, with ~50-100 cell classes? Something more, something less?

And what about other subcortical regions in the brain like hypothalamus that are more directly responsible for specific behavior? Should we expect many thousands of distinct subtypes for each of the behaviors or something more patterned?

Tell me where I’m wrong.

Two views of science

The pessimist:

These quotes give you a sense of these two books, both of which build on what Alan Richardson calls “one of the great lessons of the cognitive revolution”: “just how much of mental life remains closed to introspection.” As a brief summation, the unified thesis of Nørretranders’s and Wilson’s works looks something like this: We are not really in control. Not only are we not in control, but we are not even aware of the things of which we are not in control. Our ability to judge anything with any accuracy is a lie, as is our ability to perceive these lies as lies. Consciousness masquerades as awareness and agency, but the sense of self it conjures is an illusion. We are stranded in the great opaque secret of our biology, and what we call subjectivity is a powerless epiphenomenon, sort of like a helpless rider on the back of a galloping horse—the view is great, but pulling on the reins does nothing.

If this description of reality feels familiar to you, it’s because such a neuroscientifically inspired pessimism is a quiet but powerful strain of modern thinking.

The optimists:

The beauty of a living thing is not the atoms that go into it, but the way those atoms are put together (Carl Sagan)

Poets say science takes away from the beauty of the stars – mere globs of gas atoms. I too can see the stars on a desert night, and feel them. But do I see less or more? The vastness of the heavens stretches my imagination – stuck on this carousel my little eye can catch one – million – year – old light. A vast pattern – of which I am a part… What is the pattern, or the meaning, or the why? It does not do harm to the mystery to know a little about it. For far more marvelous is the truth than any artists of the past imagined it. Why do the poets of the present not speak of it? What men are poets who can speak of Jupiter if he were a man, but if he is an immense spinning sphere of methane and ammonia must be silent? (Richard Feynman)

These are not necessarily mutually exclusive.

But I also found this Feynman poem, which I had never heard before:

…I stand at the seashore, alone, and start to think.

There are the rushing waves, mountains of molecules
Each stupidly minding its own business
Trillions apart, yet forming white surf in unison

Ages on ages, before any eyes could see
Year after year, thunderously pounding the shore as now
For whom, for what?
On a dead planet, with no life to entertain

Never at rest, tortured by energy
Wasted prodigiously by the sun, poured into space
A mite makes the sea roar

Deep in the sea, all molecules repeat the patterns
Of one another till complex new ones are formed
They make others like themselves
And a new dance starts

Growing in size and complexity
Living things, masses of atoms, DNA, protein
Dancing a pattern ever more intricate

Out of the cradle onto the dry land
Here it is standing
Atoms with consciousness, matter with curiosity
Stands at the sea, wonders at wondering

I, a universe of atoms
An atom in the universe

(This is obviously a response to one of my favorite poems, When I Have Fears That I May Cease To Be)

#Cosyne18, by the numbers

Where does the time go? Another year, another look at my favorite conference: Cosyne, the Computational and Systems Neuroscience conference, held this year in Denver. I find it useful each year as a way to assess where the field is and where it may be heading.

First up: who is the most active? This year it is Ken Harris, whom I dub this year's Hierarch of Cosyne. The most active authors in previous years were:

  • 2004: L. Abbott/M. Meister
  • 2005: A. Zador
  • 2006: P. Dayan
  • 2007: L. Paninski
  • 2008: L. Paninski
  • 2009: J. Victor
  • 2010: A. Zador
  • 2011: L. Paninski
  • 2012: E. Simoncelli
  • 2013: J. Pillow/L. Abbott/L. Paninski
  • 2014: W. Gerstner
  • 2015: C. Brody
  • 2016: X. Wang
  • 2017: J. Pillow

If you look at the most active across all of Cosyne's history, well, nothing ever changes.

Visualizing the network diagram of co-authorships reveals some of the structure in the computational neuroscience community (click for high-resolution PDF), and zooming in:

Plotting the network of the whole history of Cosyne is a mess – there are too many dense connections. Here are three other ways of looking at it. First, only plotting the superusers (people who have 20+ abstracts across Cosyne’s history, click for PDF):

Or alternately, the regulars (10+ abstracts across Cosyne’s history, click for PDF):

And, finally, the regulars + everyone they have collaborated with (click for PDF):

I’d say the long-term structure looks something like the New York Gang (green), the European Crew (purple), the High-Dimensional Deities (blue), the Ecstasy of Entropy (magenta), and some others that I can’t come up with good names for (comments welcome).

Memming asked whether the central cluster was getting more dispersed or less cliquey with time. This is kind of a hard question to answer. If you just look at how large the central connected group is over time, the answer is a resounding no: the community is more cohesive and more connected than ever before.

On the other hand, we can look within that central cluster. How tightly connected is it? If you look at mean path length – how long it takes to get from one author to another, like degrees of Kevin Bacon or an Erdős number (a Paninski number?) – then the largest cluster is becoming more dispersed. Dan Marinazzo suggested looking at network efficiency as a metric that is more robust to network size. Network efficiency is roughly the inverse of path length: a value of 1 means you can get from any author to any other in a single step, and a value near 0 means it takes forever.
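For readers who want to play with these metrics themselves, here is a minimal sketch using networkx on a toy co-authorship graph. The abstracts and author names are made up for illustration; the real analysis runs over the full Cosyne author lists.

```python
import networkx as nx

# Each abstract is a list of its authors; every pair of co-authors
# gets an edge in the collaboration graph.
abstracts = [
    ["A", "B", "C"],
    ["B", "D"],
    ["D", "E"],
    ["F", "G"],  # a separate small cluster
]

G = nx.Graph()
for authors in abstracts:
    for i, a in enumerate(authors):
        for b in authors[i + 1:]:
            G.add_edge(a, b)

# The "central cluster": the largest connected component
giant = G.subgraph(max(nx.connected_components(G), key=len))
print(giant.number_of_nodes())  # 5

# Mean path length within the giant component (the average
# "Paninski number" over all pairs of authors in it)
print(nx.average_shortest_path_length(giant))  # 1.7

# Global efficiency: average of 1/distance over all pairs,
# counting disconnected pairs as 0 – robust to network size
print(nx.global_efficiency(G))
```

Disconnected author pairs contribute 0 to efficiency rather than an infinite path length, which is why it behaves better than mean path length when the network has stragglers.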

I now also have two years of segmented abstracts (both accepted and rejected). What are the most popular topics at Cosyne? I used doc2vec, a method that embeds each document in a high-dimensional vector space representing its semantic topics, and then visualized the result with t-SNE. The Cosyne Island that you see above shows the density of abstracts at each point; I've given the different islands names that represent the abstracts in each of them.

The words that appear more often in 2018's accepted abstracts are "movements", "uncertainty", and "motion" – looks like behavior!

The words that appear more often in rejected abstracts are "orientation", "techniques", "highdimensional", "retinal", and "spontaneous" 😦

We can also look at words that are more likely to appear in accepted abstracts in 2018 than in 2017 – the big gainers:

And the big losers this year versus last year:
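All of these word comparisons boil down to the same computation: comparing relative word frequencies between two corpora. A stdlib-only sketch, with toy abstracts standing in for the real accepted/rejected (or 2018/2017) sets, and a small smoothing constant so words absent from one corpus don't divide by zero:

```python
from collections import Counter

accepted = ["movements during uncertainty", "motion and movements"]
rejected = ["orientation of retinal cells", "spontaneous orientation maps"]

def word_freqs(texts):
    """Relative frequency of each word across a list of abstracts."""
    counts = Counter(w for t in texts for w in t.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

fa, fr = word_freqs(accepted), word_freqs(rejected)

# Smoothed frequency ratio; > 1 means over-represented in `accepted`.
# Sorting descending gives the "big gainers"; ascending, the "big losers".
eps = 1e-3
gainers = sorted(set(fa) | set(fr),
                 key=lambda w: (fa.get(w, 0) + eps) / (fr.get(w, 0) + eps),
                 reverse=True)
print(gainers[0])  # movements
```

Swapping in the 2018 and 2017 accepted corpora for `accepted` and `rejected` gives the year-over-year comparison.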

Here is a list of the twitter accounts that will be at Cosyne.

Previous years: [2014, 2015, 2016, 2017]