#cosyne14 day 3: Genes, behavior, and decisions

For other days (as they appear): 1, 2, 4

How do genes contribute to complex behavior?

Cosyne seems to have a fondness for inviting an ecologically-minded researcher to remind us computational scientists that we’re actually studying animals that exist in, you know, an environment. Last year it was ants; this year, deer mice.

Hopi Hoekstra gave an absolutely killer talk on a fairly complex behavior seen in deer mice: house building! Or rather, burrow building. These mice dig a stereotyped burrow with an entrance tunnel, a small nest chamber, and an escape tunnel that doesn’t quite make it to the surface. But not every species of deer mouse builds its burrow in precisely the same way. Only one (Peromyscus polionotus) builds escape tunnels. Most make only small little entrance tunnels (and possibly no nest?). Some don’t seem to dig at all. What causes this difference?

They crossed the species that makes long entrance tunnels and escape tunnels (P. polionotus) with a recently-diverged species (P. maniculatus) that makes only short entrance tunnels. The hybrid offspring make tunnels spanning the range from tiny to long, which suggests a multigenic trait. They did QTL mapping on these crosses and found that only a handful of genetic loci control burrow building: one locus controls construction of the escape tunnel, and three loci control entrance-tunnel length in an additive manner. One of the candidate genes controlling tunnel length is an acetylcholine receptor expressed in the basal ganglia (read: a neuromodulator receptor in the ‘motivating’ part of the brain) that has been linked to addiction in other animals.

How many different behaviors do we have?

One of the themes that popped up this year was how to quantify animal behavior. It’s really not that obvious: is a reach for a coffee mug the same as a reach for my cell phone? Maybe, maybe not. Gordon Berman and colleagues turned their analytic tools on fly behavior in an attempt to map its ‘behavioral space’.


And okay, they were able to extract what look like unique behaviors: abdomen movements, wing movements, and such. That’s pretty hard for me to have an opinion on, though; what really sold me was when they decomposed a video of someone doing the hokey pokey. That gave them a hokey-pokey space which really did correspond to putting the left foot in, and also the left foot out, not to mention shaking it all about. It’s a shame that image is not up on the arXiv…

You know a talk is good when you start off incredibly skeptical and end up nodding along fervently by the end.
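For the curious, here is roughly how I understand the pipeline from the arXiv paper; the function choices, parameters, and the spectrogram standing in for the paper’s Morlet wavelet transform are all my own simplifications, so read it as a sketch rather than a reimplementation.

```python
import numpy as np
from scipy import signal
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# Rough sketch of the behavioral-space pipeline (my reading of it; all
# parameters are made up). `frames` would be aligned, segmented video
# frames, shape (n_frames, n_pixels).

def behavioral_space(frames, fs=100.0, n_modes=20):
    # 1. Postural decomposition: PCA on the aligned images gives a small
    #    number of time series describing the animal's posture.
    modes = PCA(n_components=n_modes).fit_transform(frames)   # (n_frames, n_modes)

    # 2. Dynamics: describe each time point by the local spectral content of
    #    every postural mode (the paper uses Morlet wavelets for this step;
    #    a spectrogram is a stand-in here).
    feats = []
    for m in modes.T:
        f, t, S = signal.spectrogram(m, fs=fs, nperseg=64, noverlap=60)
        feats.append(np.log1p(S))                              # (n_freqs, n_windows)
    feats = np.concatenate(feats, axis=0).T                    # (n_windows, n_modes * n_freqs)

    # 3. Nonlinear embedding into a 2-D "behavioral space": dense, repeatedly
    #    visited regions of this map correspond to stereotyped behaviors.
    return TSNE(n_components=2, perplexity=30).fit_transform(feats)
```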

How do dopamine neurons signal prediction error?

Dopamine neurons are known to signal what is called ‘prediction error’: the difference between the expected reward and the received reward. How exactly do they do it? Neir Eshel recorded from dopamine neurons (I missed where exactly) in response to expected and unexpected rewards. If you plot spike rate against reward size, the curves fit very well to a Hill function. In fact, every neuron they record from looks the same up to a multiplicative scaling factor. That’s a bit surprising to me, because I thought there was much more heterogeneity in how, exactly, dopamine neurons respond to rewards…?

But they also find that the response to an expected reward for any given neuron is the same Hill function as for the unexpected reward, with some constant subtracted. They claim this is beneficial because it allows even slowly responding neurons to contribute to prediction error without hitting the zero lower bound; I missed the logic of this while scribbling notes, though.
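To spell the claim out as I understood it (the symbols here are mine): each neuron’s response to an unexpected reward of size $R$ follows a Hill function $r_i(R) = \alpha_i \frac{R^n}{R^n + K^n}$, with the shape parameters $n$ and $K$ shared across neurons and only the gain $\alpha_i$ differing. The response to an expected reward is the same curve shifted down by a constant, $r_i(R) - c_i$, rather than rescaled; in other words, expectation acts subtractively, not divisively.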

References
Berman GJ, Choi DM, Bialek W, & Shaevitz JW (2013). Mapping the structure of drosophilid behavior. arXiv: 1310.4249v1

Weber JN, Peterson BK, & Hoekstra HE (2013). Discrete genetic modules are responsible for complex burrow evolution in Peromyscus mice. Nature, 493 (7432), 402-5 PMID: 23325221

#cosyne14 day 2: what underlies our neural representation of the world?

Now that I’ve been armed with a tiny notepad, I’m being a bit more successful at remembering what I’ve seen. For other days (as they appear): 1, 3, 4

Connectivity and computations

The second day started with a talk by Thomas Mrsic-Flogel, motivated by the question: how does the organization of the cortex give rise to computations? He focused on connectivity between excitatory neurons in layer 2/3 of V1 in mice. Traditionally when we think of these neurons, we think of how they respond to visual stimulation: what patterns of light, what shapes or edges are they responding to? This ‘receptive field’ has a characteristic shape and (tends to) respond to certain orientations of edges. These neurons receive feedforward input from earlier in the visual pathway, but you still want to know: what kind of input do they receive from other neurons in the same layer?

By imaging the neurons during behavior and then making post-mortem brain slices, they were able to match direct connectivity with visual responses. It turns out that the neurons a given cell connects to are most likely to be neurons that respond in a similar way. Yet despite our fetish for connectomics, it is not the existence of connections that matters but their strength. And if you look at the excitatory neurons providing input to a given postsynaptic neuron, the weighted sum of their responses is exactly what the postsynaptic neuron responds to!

So in theory, if you cut off all the feedforward input to that neuron, its layer 2/3 neighbours would still drive it to respond to the same visual features as before. In fact, keeping only the strongest 12% of the connections is enough to maintain the visual representation. Of course, L2/3 neurons do receive feedforward input, so this recurrent mechanism is probably for denoising?
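A toy numerical version of the weighted-sum claim, assuming made-up orientation tuning curves and like-to-like weights (none of this is from the talk; it just illustrates the arithmetic):

```python
import numpy as np

# Toy illustration: if each presynaptic layer 2/3 neuron has an orientation
# tuning curve and connects with some synaptic weight, the postsynaptic
# neuron's tuning is predicted by the weighted sum of its inputs' tuning.

rng = np.random.default_rng(0)
orientations = np.linspace(0, np.pi, 180)            # stimulus orientations
n_pre = 50
preferred = rng.uniform(0, np.pi, n_pre)             # presynaptic preferred orientations

def tuning(theta, pref, width=0.3):
    """Von Mises-like orientation tuning curve."""
    return np.exp(np.cos(2 * (theta - pref)) / width)

pre_rates = np.array([tuning(orientations, p) for p in preferred])  # (n_pre, n_orientations)

# Like-to-like connectivity: neurons tuned near the postsynaptic cell's
# preference get the strongest weights.
post_pref = 0.5 * np.pi
weights = tuning(post_pref, preferred, width=0.5)
weights /= weights.sum()

predicted = weights @ pre_rates   # weighted sum of presynaptic responses

# The predicted tuning curve peaks at post_pref: the postsynaptic receptive
# field is "inherited" from its weighted presynaptic population.
print(orientations[np.argmax(predicted)], post_pref)
```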

Everything’s non-linear Hebb

A long-standing question with a thousand answers is why primary visual neurons respond in the manner that they do (the oriented receptive fields described above). There have been several theories (most notably from Olshausen & Field (1996) and Bell & Sejnowski (1996)) based on sparsity of responses or on the fact that these receptive fields are the optimal independent components of natural images. But the strange fact is that almost everything you do gives these same receptive fields! Why is that? Carlos Stein Naves de Brito (whew) dug into it and found that the commonality among all these algorithms is that they are essentially implementing a non-linear Hebbian learning rule ($\Delta w \propto x\, f(w \cdot x)$). One result from ICA is that it doesn’t matter which nonlinearity $f$ you use, because if it doesn’t work you can just use $-f$ and it will… so this is a very nice result. The paper will be well worth reading.
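For concreteness, here is a minimal sketch of that rule in action; the data are synthetic stand-ins for whitened natural-image patches and the particular $f$ is arbitrary.

```python
import numpy as np

# Minimal sketch of the nonlinear Hebbian rule dw ∝ x f(w·x). The inputs here
# are synthetic; in the actual story they would be whitened natural-image
# patches, and the claim is that nearly any choice of f yields localized,
# oriented (Gabor-like) filters. Everything below is illustrative.

rng = np.random.default_rng(0)
n_dim, n_samples = 64, 10_000                 # e.g. flattened 8x8 patches
X = rng.standard_normal((n_samples, n_dim))   # stand-in for whitened patches

f = lambda y: y ** 3                          # some expansive odd nonlinearity
w = rng.standard_normal(n_dim)
w /= np.linalg.norm(w)

eta = 1e-3
for x in X:
    y = w @ x
    w += eta * x * f(y)                       # nonlinear Hebbian update
    w /= np.linalg.norm(w)                    # normalization keeps w from blowing up

# With real whitened image patches, w converges to an oriented, localized
# filter, largely regardless of the particular f (up to a sign).
```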

Maximum entropy, minimal assumptions

Elad Schneidman gave his usual talk about using maximum entropy models to understand the neural code. Briefly, these are models of the statistical relationships in observed data that make as few assumptions as possible about the organization of that data (see also: Ising models). If you use a model that only looks at pairwise correlations, i.e. correlations between pairs of neurons, that’s enough to describe how populations of neurons respond to white noise.
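To make the model class concrete (notation mine, but this is the standard form): a pairwise maximum entropy model over binary spike words $x = (x_1, \ldots, x_N)$ has the Ising form $P(x) = \frac{1}{Z} \exp\left(\sum_i h_i x_i + \sum_{i<j} J_{ij} x_i x_j\right)$, where the fields $h_i$ and couplings $J_{ij}$ are chosen so the model reproduces the measured firing rates and pairwise correlations, and nothing beyond that.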

But it turns out that this is not enough to describe their responses to natural stimuli! The correlations induced by these stimuli must trigger fundamentally different computations than white noise does. The model that does work is something they call the reliable interaction model (RIM). It uses few parameters and is fit using only the most common population patterns (instead of trying to estimate all orders of correlation, i.e. correlations between triplets of neurons, and so on). This fits extremely well, which suggests that a high-order interaction network underlies a highly structured neural code.

If you then examine population responses to stimuli, you find that the brain responds to the same stimulus with different population responses on different presentations. They are using this to construct a ‘thesaurus’ of neural words, in which they find a lot of structure when comparing responses with the Jensen-Shannon divergence $D(p(s|r_1), p(s|r_2))$. What I think they are missing (and are going to miss with these analyses) is a focus on the dynamics. What is the context in which each synonymous word arises? Why are there so many synonymous words? And so on. But it promises to be pretty interesting.
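For reference, the similarity measure here (in my notation) is the Jensen-Shannon divergence between the stimulus distributions implied by two responses: $D_{JS}\big(p(s|r_1), p(s|r_2)\big) = \tfrac{1}{2} D_{KL}\big(p(s|r_1) \,\|\, m\big) + \tfrac{1}{2} D_{KL}\big(p(s|r_2) \,\|\, m\big)$ with $m = \tfrac{1}{2}\big(p(s|r_1) + p(s|r_2)\big)$. Two responses with a small divergence imply nearly the same thing about the stimulus, which is what makes them ‘synonyms’.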

Why so many neurons?

When we measure the response of a population of neurons to something that stimulates them, it often seems like the dimensionality of the stimulus (velocity, orientation, etc) is much, much lower than the number of neurons being used to represent it. So why do we need so many neurons? Is it not more efficient to just use one neuron per dimension?

I didn’t entirely follow the logic of Peiran Gao’s talk (I got distracted by Twitter…), but they relate the answer to the complexity of the task and argue that random projection theory predicts how many neurons are needed, which is many more than the dimensionality of the task.
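If the flavor of the argument is what I think it is (something random-projection / Johnson-Lindenstrauss-like), here is a toy example, entirely my own, of why you would want many more neurons than task dimensions: a low-dimensional task variable randomly projected into a population only keeps its geometry once the population is big enough.

```python
import numpy as np
from scipy.spatial.distance import pdist

# Toy illustration (mine, not from the talk): a 3-D task manifold embedded in
# a neural population via random weights preserves its pairwise distances
# faithfully only when the population is much larger than 3.

rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 200)
task = np.stack([np.cos(t), np.sin(t), np.cos(3 * t)], axis=1)   # 3-D task manifold
d_task = pdist(task)                                             # pairwise distances in task space

def max_distortion(n_neurons):
    """Worst-case relative error in pairwise distances after a random projection."""
    W = rng.standard_normal((task.shape[1], n_neurons)) / np.sqrt(n_neurons)
    d_neural = pdist(task @ W)
    return np.max(np.abs(d_neural - d_task) / d_task)

for n in [3, 10, 100, 1000]:
    print(n, round(max_distortion(n), 3))   # distortion shrinks as the population grows
```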

References

Ganmor E, Segev R, & Schneidman E (2011). Sparse low-order interaction network underlies a highly correlated and learnable neural population code. Proceedings of the National Academy of Sciences of the United States of America, 108 (23), 9679-84 PMID: 21602497

Cosyne, Day 1

To sum up day 1: I forgot my phone charger and all my toiletries, and managed to lose my notebook by the end of the first lecture…! But I brought my ski gear, so there’s that. Mental priorities. For other days (as they appear): 2, 3, 4

Motor control

Tom Jessell gave the opening talk on motor control. The motor cortex must send a command, or motor program, down the spinal cord, but this creates a latency problem: it takes too much time for the command to travel down the spinal cord and for an error signal to come back up in case something goes wrong. To solve the problem, the motor system keeps a local internal copy of the command in propriospinal neurons (PN). A simple model from engineering says that if you disrupt this internal copy, you can no longer control the gain of the movement and will get oscillations. And indeed, when Jessell interferes with PN activity, a mouse that would normally reach directly for a pellet instead moves its paw up and down in a slow forward circle – oscillating! I think he also implicated a signal that directly modifies presynaptic release through GABA in this behavior.

(Apologies if this is wrong, as I said, I lost my notebook and am relying on memory for this one.)
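Since I’m reconstructing from memory anyway, here is a toy simulation, my own construction rather than anything from the talk, of the engineering point about delayed feedback:

```python
import numpy as np

# Toy version of the intuition: a controller relying on sensory feedback
# delayed by ~50 ms overshoots and oscillates, while one that also keeps an
# immediate internal copy of its own commands (roughly, a Smith predictor)
# converges smoothly. All parameters below are invented for illustration.

dt, T = 0.001, 1.5                    # time step and duration, in seconds
delay_steps = int(0.05 / dt)          # 50 ms sensory feedback delay
gain = 40.0                           # feedback gain
target = 1.0                          # desired limb position

def simulate(use_internal_copy):
    x = 0.0                           # limb position (simple first-order plant)
    internal = 0.0                    # internal estimate built from efference copies
    sensed_queue = [0.0] * delay_steps
    trace = []
    for _ in range(int(T / dt)):
        sensed = sensed_queue[0]                     # delayed sensory estimate of x
        estimate = internal if use_internal_copy else sensed
        command = gain * (target - estimate)
        x += command * dt
        internal += command * dt                     # internal copy integrates commands
        sensed_queue = sensed_queue[1:] + [x]
        trace.append(x)
    return np.array(trace)

smooth = simulate(use_internal_copy=True)            # converges quietly to the target
wobbly = simulate(use_internal_copy=False)           # overshoots and keeps oscillating
print(smooth[-1], wobbly[-500:].min(), wobbly[-500:].max())
```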

References

Azim E, Jiang J, Alstermark B, & Jessell TM (2014). Skilled reaching relies on a V2a propriospinal internal copy circuit. Nature PMID: 24487617