# #Cosyne2016, by the numbers

Cosyne is the systems and computational neuroscience conference held every year in Salt Lake City and Snowbird. It is a pretty good representation of the direction the community is heading…though given the falling acceptance rate, you have to wonder how true that will stay, especially for those on the ‘fringe’. But 2016 is upon us, so it is time to update the Cosyne statistics.

I’m always curious about who is most active in any given year, and this year it is Xiao-Jing Wang, whom I dub this year’s Hierarch of Cosyne. I always think of his work on decision-making and the speed–accuracy tradeoff. He has used some very nice modeling of small circuits to show how these tasks could be implemented in nervous systems. Glancing over his posters, though, his work this year looks a bit more varied.

Still, it is nice to see such a large clump of people at the top: the distribution of posters is much flatter this year than previously.

• 2004: L. Abbott/M. Meister
• 2006: P. Dayan
• 2007: L. Paninski
• 2008: L. Paninski
• 2009: J. Victor
• 2011: L. Paninski
• 2012: E. Simoncelli
• 2013: J. Pillow/L. Abbott/L. Paninski
• 2014: W. Gerstner
• 2015: C. Brody
• 2016: X. Wang

If you look at the total number across all years, well, Liam Paninski is still massacring everyone else. At this rate, even if Pope Paninski doesn’t submit any abstracts over the next few years and someone else submits six per year, it will be a good half a decade before he could possibly be dethroned.

The network diagram of co-authors is interesting, as usual. Here is the network diagram for 2016 (click for PDF):

And the mess that is all-time Cosyne:

I was curious about this network. How connected is it? What is its dimensionality? If you look at the eigenvalues of the adjacency matrix, you get:

I put the first two eigenvectors at the bottom of this post, but suffice it to say the first eigenvector is basically Pouget vs. Latham! And the second is Pillow vs Paninski! So of course, I had to plot a few people in Pouget-Pillowspace:

(What does this tell us? God knows, but I find it kind of funny. Pillowspace.)
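For the curious, the analysis itself is only a few lines. Here is a minimal sketch with a made-up toy adjacency matrix (author names and counts are placeholders; the real matrix is built from the co-authorship counts the same way):

```python
import numpy as np

# Toy symmetric co-authorship matrix for four authors: entry (i, j)
# counts the abstracts that authors i and j share.
authors = ["A", "B", "C", "D"]
adj = np.array([
    [0, 3, 0, 1],
    [3, 0, 1, 0],
    [0, 1, 0, 2],
    [1, 0, 2, 0],
], dtype=float)

# eigh handles symmetric matrices and returns eigenvalues in
# ascending order, so the leading eigenvectors are the last columns.
vals, vecs = np.linalg.eigh(adj)
v1, v2 = vecs[:, -1], vecs[:, -2]

# Project each author onto the top two eigenvectors --
# the "Pouget-Pillowspace" coordinates.
coords = {a: (v1[i], v2[i]) for i, a in enumerate(authors)}
```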

Finally, I took a bunch of abstracts and fed them through a Markov model to generate some prototypical Cosyne sentences. Here are abstracts that you can submit for next year:

• Based on gap in the solution with tighter synchrony manifested both a dark noise [and] much more abstract grammatical rules.
• Tuning curves should not be crucial for an approximate Bayesian inference which would shift in sensory information about alternatives
• However that information about 1 dimensional latent state would voluntarily switch to odor input pathways.
• We used in the inter vibrissa evoked responses to obtain time frequency use of power law in sensory perception such manageable pieces have been argued to simultaneously [shift] acoustic patterns to food reward to significantly shifted responses
• We obtained a computational capacity that is purely visual that the visual information may allow ganglion cells [to] use inhibitory coupling as NMDA receptors, pg iii, Dynamical State University
• Here we find that the drifting gratings represent the performance of the movement.
• For example, competing perceptions thereby preserve the interactions between network modalities.
• This modeling framework of goal changes uses [the] gamma distribution.
• Computation and spontaneous activity at the other stimulus saliency is innocuous and their target location in cortex encodes the initiation.
• It is known as the presentation of the forepaw target reaction times is described with low dimensional systems theory Laura Busse Andrea Benucci Matteo Carandini Smith-Kettlewell Eye Research.
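The generator behind these is nothing fancy; a word-level Markov chain along these lines (my sketch – the function names and details are invented, but the technique is the same):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each n-gram of words to the words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, n_words=15, seed=None):
    """Walk the chain from a random starting n-gram."""
    rng = random.Random(seed)
    state = rng.choice(list(chain))
    out = list(state)
    for _ in range(n_words - len(state)):
        followers = chain.get(state)
        if not followers:                  # dead end: restart somewhere random
            state = rng.choice(list(chain))
            continue
        word = rng.choice(followers)
        out.append(word)
        state = tuple(out[-len(state):])   # slide the window forward
    return " ".join(out)
```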

Note: sorry about the small font size. This is normally a pet peeve of mine. I need to get access to Illustrator to fix it and will do so later…

The first two eigenvectors:

# #Cosyne2015, by the numbers

Another year, another Cosyne. Sadly, I will be there only in spirit (and not, you know, reality.) But I did manage to get my hands all over the Cosyne abstract authors data…I can now tell you everyone who has had a poster or talk presented there and who it was with. Did you know Steven Pinker was a coauthor on a paper in 2004?!

This year, the winner of the ‘most posters’ award (aka, the Hierarch of Cosyne) goes to Carlos Brody. Carlos has been developing high-throughput technology to really bang away at the hard problem of decision-making in rodents, and now all that work is coming out at once. Full disclosure: his lab sits above mine and they are all doing really awesome work.

Here are the Hierarchs, historically:

• 2004: L. Abbott/M. Meister
• 2006: P. Dayan
• 2007: L. Paninski
• 2008: L. Paninski
• 2009: J. Victor
• 2011: L. Paninski
• 2012: E. Simoncelli
• 2013: J. Pillow/L. Abbott/L. Paninski
• 2014: W. Gerstner
• 2015: C. Brody

Above is the total number of posters/abstracts by author. There are prolific authors, and there is Liam Paninski. Congratulations Liam, you maintain your iron grip as the Pope of Cosyne.

As a technical note, I identified ‘unique’ names by pairing the first initial with the last name. I’m pretty sure X. Wang is at least two or three different people, and some names (especially those with an umlaut or, for some reason, Paul Schrater) are especially likely to change spelling from year to year. I tried correcting a bit, but fair warning.
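In code, the de-duplication amounts to something like this (my sketch; the accent-stripping is an extra step that still won’t catch every variant spelling):

```python
import unicodedata

def name_key(full_name):
    """Collapse an author name to (first initial, last name),
    stripping accents so e.g. umlauted and plain spellings merge."""
    ascii_name = unicodedata.normalize("NFKD", full_name)
    ascii_name = ascii_name.encode("ascii", "ignore").decode()
    parts = ascii_name.split()
    return (parts[0][0].upper(), parts[-1].lower())
```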

As I mentioned last year, the distribution of posters follows a power law.

But now we have the network data and it is pretty awesome to behold. I was surprised that if we just look at this year’s posters, there is tons of structure (click here for a high-res, low-size PDF version):

When you include both 2014 and 2015, things get even more connected (again, PDF version):

Beyond this it starts becoming a mess. The community is way too interconnected and lines fly about every which way. If anyone has an idea of a good way to visualize all the data (2004-2015), I am all ears. And as I said, I have the full connectivity diagram so if anyone wants to play around with the data, just shoot me an email at adam.calhoun at gmail.

Any suggestions for further analyses?

# Open Question: Do we need a new Cosyne?

UCSD started one of the first (the first?) computational neuroscience departments. But when I started graduate school there, it was being folded into the general Neuroscience department; now it is just a specialization within the department. Why? Because we won. Because people who used to be computational neuroscientists are now just – neuroscientists. I could tell there was a change at UCSD when people trained in electrical engineering instead of biology didn’t even feel the need to join the specialization. What used to be a unique skill is becoming more and more common.

I have been thinking about this for the last few days after news trickled out about acceptances and rejections at Cosyne (note: I did not submit an abstract to the Cosyne main meeting.) The rejection rate this year was around 40%. Think about this for a minute: nearly half of the people who had wanted to present what they had been working on to their peers were not able to do so.

Now, people go to conferences for a wide variety of reasons. Some go to socialize, some to hear talks, some for a vacation. But the most important reason is to communicate your new research to your peers. And it’s a serious problem when half of the community just can’t do that.

Cosyne fills the very important role of bringing together the Computational and Systems fields of neuroscience (hence, CoSyNe). But when it was founded in 2004, this was not a big group of people. Perhaps the field has just gotten too big to accommodate everyone in one medium-sized conference; either the conference must grow or people need to flee to more specialized grounds – and repeat the process of growth and rebirth.

At dinner recently, I mentioned that it may be time for some smaller conferences to split off from Cosyne. Heads nodded in agreement; it’s not just me being contrary. There are other computational conferences – CNS, NIPS, SAND, RLDM. But none of them reside in the niche of Cosyne, none of them bring together experimentalists and theorists in the same way. The closest is RLDM which occupies a kind of intersection of Cosyne and Machine Learning. (edit: there is also CodeNeuro, though I don’t yet have a sense of the community there.)

We need more of that.

# The Hierarch of Cosyne

Fair warning: I appear to be in a list-making and ranking mood lately. This list is probably not 100% accurate. It covers every Cosyne except 2012 & 2013 (seriously, get the list of posters up in a non-PDF form; I ain’t scraping that.)

I thought that a good way to get a handle on who is active in the computational neuroscience community would be to see who presents the most posters at Cosyne. Presumably, the more active you are, the more posters that you will have. There are obvious biases here: bigger labs will have more posters, international researchers have a harder time making it to Cosyne, and some people (eg Terry Sejnowski) just aren’t interested in showing up. So take this for what it is.

This year the winner of the ‘Most Posters’ award (aka, The Hierarch of Cosyne) was Wulfram Gerstner with 6 posters, followed by Jonathan Pillow, Tatyana Sharpee, and Maneesh Sahani with 5.

Historically, the number of posters follows a power law (with obeisance given to Cosma Shalizi, noting this is probably not a power law and I’m too lazy to test it.)
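If anyone does feel like testing it, a quick-and-dirty check is the slope of the log-log count histogram – with the Shalizi-approved caveat that a straight line on a log-log plot is weak evidence for a power law:

```python
import numpy as np

def loglog_slope(counts):
    """Fit a line to the log-log histogram of posters-per-author.
    A straight line is consistent with (but does not prove) a power
    law; proper tests are in Clauset, Shalizi & Newman."""
    counts = np.asarray(counts)
    values, freqs = np.unique(counts, return_counts=True)
    slope, intercept = np.polyfit(np.log(values), np.log(freqs), 1)
    return slope
```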

Here is the ranking of most Cosyne posters aka the “Pope of Cosyne” award – remember that I’m unfortunately omitting 2012-2013:

(32) Liam Paninski

(22) Maneesh Sahani

(18) Jonathan Pillow

(16) Wei Ji Ma

I was going to make a connectivity diagram but I realized I have no idea how! If anyone has a tool that is easy to use, let me know.

(Incidentally, the most common last name was ‘Wang’ followed by ‘Paninski’)

# #cosyne14 day 3: Genes, behavior, and decisions

For other days (as they appear): 1, 2, 4

How do genes contribute to complex behavior?

Cosyne seems to have a fondness for inviting an ecologically-minded researcher to remind us computational scientists that we’re actually studying animals that exist in, you know, an environment. Last year it was ants; this year, deer mice.

Hopi Hoekstra gave an absolutely killer talk on a fairly complex behavior that is seen in deer mice: house building! Or rather, burrow building. These mice will dig a stereotyped burrow with an entrance tunnel, a small nest, and an escape tunnel that doesn’t quite make it to the surface (see below). But not every species of deer mouse builds its burrow in precisely the same way. Only one (Peromyscus polionotus) will build escape tunnels. Most will only make small entrance tunnels (and possibly no nest?). Some don’t seem to dig at all. What causes this difference?

They crossed the long-tunnel species (polionotus) with a recently-diverged species that makes only short entrance tunnels. These little guys will make tunnels that span the range from tiny to long, which suggests a multigenic trait. They did QTL analysis on these crosses and found that only five genes are required to control burrow building! One gene controls the construction of the escape tunnel, and four (three?) genes control entrance-tunnel length in an additive manner. One of the genes controlling tunnel length is an acetylcholine receptor in the basal ganglia (read: a neuromodulator receptor in the ‘motivating’ part of the brain) that has been linked to addiction in other animals.

How many different behaviors do we have?

One of the themes that seemed to pop up this year was how to quantify animal behavior. It’s really not that obvious: is a reach for a coffee mug the same as a reach for my cell phone? Maybe, maybe not. Gordon Berman applied his analytic tools to fly behavior in an attempt to map its ‘behavioral space’.

And okay, they were able to extract what look like unique behaviors: abdomen movements and wing movements and such. Okay, but that’s pretty hard for me to have an opinion on; what really sold me is when they decomposed a video of someone doing the hokey pokey. That gave them a hokey-pokey space which really corresponded to putting the left foot in, and also the left foot out, not to mention shaking things all about. It’s a shame that image is not up on the arXiv…

You know a talk is good when you start off incredibly skeptical and end up nodding along fervently by the end.

How do dopamine neurons signal prediction error?

Dopamine neurons are known to signal what is called ‘prediction error’: the difference between the expected reward and the received reward. How exactly are they doing it? Neir Eshel recorded dopamine neurons’ responses (I missed where exactly) to expected and unexpected rewards. If you look at the reward vs. spike rate curve, it fits very well to a Hill function. In fact, every neuron they record from looks the same up to some multiplicative scaling factor. That’s a bit surprising to me because I thought there was much more heterogeneity in how, exactly, dopamine neurons respond to rewards…??

But they also find that the response to expected reward for any given neuron is the same Hill function as for the unexpected reward with some constant subtracted. They claim that this is beneficial because it allows even slowly responding neurons to contribute to prediction error without hitting the zero lower bound; I missed the logic of this when scribbling notes, though.
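To make that concrete, here is a sketch of the claim – one Hill function, a multiplicative gain per neuron, and a constant subtracted for expected reward. All parameter values here are invented, not from the talk:

```python
import numpy as np

def hill(r, rmax=1.0, k=0.5, n=2.0):
    """Hill function: a saturating response to reward size r."""
    return rmax * r**n / (k**n + r**n)

rewards = np.linspace(0, 2, 50)

# The claim, roughly: every neuron's curve is a scaled copy of the
# same Hill function, and the expected-reward curve is that same
# curve with a constant subtracted.
gain = 3.0
unexpected = gain * hill(rewards)
expected = gain * hill(rewards) - 0.4   # same shape, shifted down
```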

References
Berman GJ, Choi DM, Bialek W, & Shaevitz JW (2013). Mapping the structure of drosophilid behavior. arXiv: 1310.4249v1

Weber JN, Peterson BK, & Hoekstra HE (2013). Discrete genetic modules are responsible for complex burrow evolution in Peromyscus mice. Nature, 493 (7432), 402-5 PMID: 23325221

# #cosyne14 day 2: what underlies our neural representation of the world?

Now that I’ve been armed with a tiny notepad, I’m being a bit more successful at remembering what I’ve seen. For other days (as they appear): 1, 3, 4

Connectivity and computations

The second day started with a talk by Thomas Mrsic-Flogel motivated by the question of, how does the organization of the cortex give rise to computations? He focused on connectivity between excitatory neurons in layer 2/3 of V1 in mice. Traditionally when we think of these neurons, we think of how they respond to visual stimulation: what patterns of light activity, what shapes or edges are they responding to? This ‘receptive field’ has a characteristic shape and (tends to) respond to certain orientations of edges [see left]. They receive input from more primary visual neurons, but you still want to know: what type of input do they receive from other neurons in the same layer?

By imaging the neurons during behavior and then making post-mortem brain slices, they are able to match direct connectivity with visual responses. It turns out that the neurons they connect to are most likely to be neurons that respond in a similar way. Yet despite our fetish for connectomics, it is not the mere existence of connections that matters but the strength of those connections. And if you look at the excitatory neurons that provide input to a given postsynaptic neuron, the weighted sum of their responses is exactly what the postsynaptic neuron responds to!

So theoretically, if you cut off all the external input to that neuron, it would still respond to the same visual input as before. In fact, if you only use the strongest 12% of the connections, that’s enough to maintain the visual representation. Of course, L2/3 neurons do receive external input so this mechanism is probably for denoising?

Everything’s non-linear Hebb

A long-standing question with a thousand answers is why primary visual neurons respond in the manner that they do (first image above). There have been several theories (most notably from Olshausen (1996) and Bell & Sejnowski (1996)) dealing with sparsity of responses or the fact that these are the optimal independent components of natural images. But the strange fact is that almost everything you do gives these same receptive fields! Why is that? Carlos Stein Naves de Brito (whew) dug into it and found that the commonality among all these algorithms is that they are essentially implementing a non-linear Hebbian learning rule ($\Delta w \propto x\,f(wx)$). One result from ICA is that it doesn’t much matter which nonlinearity $f$ you use, because if it doesn’t work you can just use $-f$ and it will… so this is a very nice result. The paper will be well worth reading.
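Here is a toy version of the rule in action: the update $\Delta w \propto x\,f(wx)$ with weight normalization, using tanh as the (largely interchangeable, per the result above) nonlinearity. On toy data with one dominant direction of variance, the weights align to that direction. This is my sketch, not de Brito’s code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 2-D samples with one dominant direction of variance.
true_dir = np.array([0.8, 0.6])            # unit vector
X = rng.normal(size=(5000, 2)) * 0.1 + rng.normal(size=(5000, 1)) * true_dir

w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x
    w += eta * x * np.tanh(y)      # dw ∝ x f(wx), here f = tanh
    w /= np.linalg.norm(w)         # normalization keeps w bounded

# w should now align (up to sign) with the dominant direction.
```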

Maximum entropy, minimal assumptions

Elad Schneidman gave his usual talk about using maximum entropy models to understand the neural code. Briefly, this is a class of statistical models of observed data that makes as few assumptions about the organization of that data as possible (see also: Ising models). If you use a model that only captures pairwise correlations, i.e. correlations between pairs of neurons, that’s enough to describe how populations of neurons respond to white noise.

But it turns out that it’s not enough to describe their response to natural stimuli! The correlations induced by these stimuli must trigger fundamentally different computations than white noise does. The model that does work is something they call the reliable interaction model (RIM). It uses few parameters and fits using only the most common patterns (instead of trying to find all orders of correlation, i.e. correlations between triplets of neurons, etc.). This fits extremely well, which suggests that a high-order interaction network underlies a highly structured neural code.

If you then examine the population responses to stimuli, you’ll find that the brain responds to the same stimulus with different population responses. They’re using this to construct a ‘thesaurus’ of words, in which they find high structure when using the Jensen-Shannon divergence $D(p(s|r_1), p(s|r_2))$. What I think they are missing (and are going to miss with their analyses) is a focus on the dynamics. What is the context in which each synonymous word arises? Why are there so many synonymous words? Etc. But it promises to be pretty interesting.
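For reference, the Jensen-Shannon divergence they use is easy to compute from two conditional stimulus distributions; a sketch:

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence in bits, skipping zero-probability bins."""
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

def js(p, q):
    """Jensen-Shannon divergence between two stimulus distributions,
    e.g. p(s|r1) and p(s|r2): symmetric and bounded between 0 and 1 bit."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```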

Why so many neurons?

When we measure the response of a population of neurons to something that stimulates them, it often seems like the dimensionality of the stimulus (velocity, orientation, etc) is much, much lower than the number of neurons being used to represent it. So why do we need so many neurons? Is it not more efficient to just use one neuron per dimension?

I didn’t entirely follow the logic of Peiran Gao’s talk (I got distracted by twitter…), but they relate it to the complexity of the task and say that random projection theory predicts how many neurons are needed, which is much more than the dimensionality of the task.

References

Ganmor E, Segev R, & Schneidman E (2011). Sparse low-order interaction network underlies a highly correlated and learnable neural population code. Proceedings of the National Academy of Sciences of the United States of America, 108 (23), 9679-84 PMID: 21602497

# Cosyne, Day 1

To sum up day 1: I forgot my phone charger and all my toiletries and managed to lose my notebook by the end of the first lecture…! But I brought my ski gear, so there’s that. Mental priorities. For other days (as they appear): 2, 3, 4

Tom Jessell gave the opening talk on motor control. The motor cortex must send a command, or motor program, down the spinal cord, but this causes a latency problem: it takes too much time for an error signal to travel down the spinal cord and back (in case something goes wrong). To solve the problem, the motor system keeps a local internal copy (PN, left). A simple model from engineering says that if you disrupt this gating, you can no longer control the gain of the movement and will get oscillations. So when Jessell interferes with PN activity, a mouse that would normally reach directly for a pellet instead moves its paw up and down in a slow forward circle – oscillating! I think that he also implicated a signal that directly modifies presynaptic release through GABA in this behavior.

(Apologies if this is wrong, as I said, I lost my notebook and am relying on memory for this one.)

References

Azim E, Jiang J, Alstermark B, & Jessell TM (2014). Skilled reaching relies on a V2a propriospinal internal copy circuit. Nature PMID: 24487617

# Cosyne: Foraging!

I think I have found my people.  The workshops after the main Cosyne meeting were smaller and more focused, and really allowed you to delve into a topic.  I spent the first day at the Neural Mechanisms of Foraging workshop and found myself a bunch of neuroecologists!

I think I’m just going to summarize a bunch of talks instead of any one individually. I missed the first few minutes of introduction, but I got the impression that this was the first meeting of ‘neuroforagers’ to ever take place; Michael Platt called it a “coming out party for foraging”. Foraging – to define it briefly – is the decision to leave a reward source to explore new options. It’s apparently a great task for monkeys, too: many basic behaviors that we train monkeys to perform can take a long time to train, while foraging can be taught in a single session. It’s also totally natural, which is itself a reason why we should be studying it!

There were two recurring themes in the talks: that the anterior cingulate cortex (ACC) is the foraging center, and that economics approaches aren’t doing much good. Talk after talk recorded from the ACC or studied how ACC activity is shaped. Just like the Basal Ganglia meeting that The Cellular Scale attended, every talk included The Dopamine Slide. Michael Platt suggested at the end that he hoped every talk at future foraging meetings would include a figure from one of his papers (which one, I have now forgotten!). Well, I don’t do ACC, so probably not for me anyway.

The other theme was the failure of economic models to explain behavior.  Talk after talk included some variant of, “we tried fitting this to a [temporal discounting/risk-preference/reinforcement-learning/optimal foraging]  model but it didn’t account for the data”.  Almost all of them said that!  The naive assumption that we should move to optimize immediate reward is, somehow, failing.  Some kind of new principle (or perhaps better model-fitting) will be needed to consistently explain actual behavior.

# Cosyne: The Big Talks

Besides the great stuff on decision-making, the other part of the main meeting I wanted to discuss were some of the Big Talks.  This is the place where some of the Big Guns of neuroscience were just doing their thing, talking about neuroscience.

Bill Bialek

Eve Marder

Eve Marder studies the lobster stomatogastric ganglion (STG), which is the neural system that controls the stomach, basically. It’s a great setup and has yielded tons of interesting results, but there wasn’t exactly tons that was new in the talk. Fortunately, Marder is an excellent lecturer and it was interesting throughout. The most interesting comment she made is that they actually know the whole connectome of the STG and have the ability to record from neurons in the system for weeks at a time! And yet they still don’t know how it all works. Take that, connectomists.

If you are interested at all in her work or seeing her talk – and you should be – watch this short YouTube series on her work.

Terry Sejnowski

Terry Sejnowski gave a talk in two parts.  Strangely, the first part had little to do with his own research and was instead used as a thematic introduction to the second part of his talk.  He spent his time explaining how a camera based on a simple model of the retina – spiking only when it saw a new edge moved across the field of view – was able to naturally accomplish things such as identifying objects that researchers in computer vision have spent decades trying to do, only somewhat successfully.  And emulating the retina naturally endows it with other great features such as amazing temporal precision and extremely low energy usage.  See? Neuroscience is useful.

This led him to the second part of his talk: the Brain Activity Map (BAM).  He started off telling the story of how the NYT article came into being, and why it seemed so sketchy.  Basically: when Obama mentioned brain research in his State of the Union, a member of NIH that knew about the project happened to tweet, “Obama supports the Brain Activity Map!” (or something similar).  From this one little tweet, the reporter’s instincts kicked in – he’d certainly never heard of any “Brain Activity Map” – and, after calling around to his sources, got the scoop on BAM.  Sejnowski was here to finally let the rest of us neuroscientists in on what was going on.

I think a lot of what he talked about has been released in the recent Science perspective, but he certainly was excited about having met Obama… He also said (I think) that the BAM proposal was on a list of ten big science projects, and it beat them out.  The data that will be stored – the activity of thousands or even millions of cells simultaneously – will be enormous.  Microsoft, Google, and Qualcomm were in on the meetings and apparently basically said, “let us deal with that.”  Since the data size is so enormous, the idea will be to have “brain observatories” where the data will reside; the data will be open access and analyses will be done on computers at the ‘observatories’.  That way, no one has to worry about downloading the data sets!

Of course, the thing on everyone’s mind is whether funding for BAM will take away funding from other basic neuroscience research.  When that came up in questions, Eve Marder said that the NIH heads have been discussing it and they want to make sure that there are no reductions in R01s (the ‘basic’ grant given to researchers).  Basically, this is just more money.  If it ever passes (to quote Sejnowski: “the hope is that both Democrats and Republicans have brains”).

Again, the important point here is that Sejnowski was really, really excited and it was kind of adorable.

Tony Movshon

Ah, Tiny Movshon (as my iPhone kept trying to autocorrect). This was by far my favorite talk of the meeting, mostly because Movshon basically trolled all the rodent-vision people in the audience. He gave a great, contradictory 45-minute talk about how if you’re not doing primate vision, you’re wasting your time. Okay, he really said that the mouse visual system is too different from the primate one to be of any use, because it’s too evolutionarily distant. Except the cat visual system is great! Even though the cat is even more evolutionarily distant. But whatever, his solution to the problem of not being able to do genetics in monkeys and needing a replacement for mice is – wait for it – the mouse lemur! Obviously. Here is your future cuddly neuroscience overlord:

Unfortunately, I don’t think we really know anything about the mouse lemur yet, but that shouldn’t stop us from replacing all our mice with this cute little guy!

Anyway, at the end of the talk it was like someone disturbed a bees’ nest.  All the rodent vision people were visibly distressed; would it be mean to say that as a lonely invertebrate person it was a nice bit of schadenfreude?

# Cosyne: Decision-making

I spent a week recently in Salt Lake City at Cosyne (COmputational and SYstems NEuroscience); people had told me that it was their favorite conference, and now I understand why. Other attendees have put up their reactions, so I figure it’s about time I got off the couch and did the same.

Probably the biggest effect this meeting had on me is that I started using twitter a bit more seriously – follow me at @neuroecology – and participated in my first “tweet-up” (is that really a thing?).  There are lots of great neuroscientists tweeting though there should be more!

For simplicity of organization, there will be three posts on Cosyne: one on a few decision-making talks, one on The Big Talks, and one on neural correlates of foraging.

Carlos Brody

On decision-making, the first (and longest) talk was by Carlos Brody. His talk was focused on the question of how we make decisions in noisy environments. In this case, rats had to sit around listening to two speakers emit clicks at random (Poisson) intervals and decide which speaker, left or right, was clicking more. We typically think of the way animals make these types of decisions as a ‘noisy integrator’: each point of information – each click – is added up, with some noise thrown in because we’re imperfect, forgetful, and the environment (and our minds!) are full of noise. The decision is then made when enough information has accumulated that the animal can be confident in going one direction or the other.

One small problem with this is that there are a lot of models that are consistent with the behavioral data. How noisy is the internal mind? Is it noisy at all? How forgetful are we? That sort of thing. The Brody lab fit the data to many models and found that the one that most accurately describes the observed behavior is a slow accumulator that is leaky (i.e. a bit forgetful) but where the only noise comes from the sensory input! Actually, I have in my notes both that it is ‘lossless’ and that it is ‘leaky’, so I’m not sure which of the two is accurate, but the important detail is that once the information is in the brain it gets computed on perfectly and our integration is noiseless; all the noise in the system comes from the sensory world.
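To make the model class concrete, here is a toy Poisson-clicks trial in that spirit – leaky integration where the only noise sits on the clicks themselves. All parameters are made up, and the Brody lab’s actual fitted model has more moving parts:

```python
import numpy as np

rng = np.random.default_rng(1)

def trial(rate_left, rate_right, T=1.0, dt=0.001,
          leak=1.0, sensory_noise=0.3):
    """One Poisson-clicks trial: integrate right-minus-left click
    evidence with a leak; the only noise is on each click's magnitude
    (sensory noise), not in the integration itself."""
    a = 0.0
    for _ in range(int(T / dt)):
        clicks = (rng.random() < rate_right * dt) - (rng.random() < rate_left * dt)
        if clicks:
            a += clicks * (1 + sensory_noise * rng.normal())
        a += -leak * a * dt        # leaky (forgetful) integration
    return a > 0                    # choose 'right' if accumulator is positive

# With more right clicks than left, the model should mostly choose right.
choices = [trial(10, 30) for _ in range(200)]
```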

They also recorded from two areas in the rat brain, the posterior parietal cortex (PPC) and the frontal orienting fields (FOF). The PPC is an area analogous to LIP where neural activity looks like it is integrating information; you can even look at the neural activity in response to every click from a speaker and see the information (activity) go up and down! The rational expectation is that you’d need this information to make a decision, right? Well, when he goes and inactivates the region, there is no effect on behavior. The other region he records from is the FOF, which is responsible for orienting the head (say, in the direction of the right decision). The neural activity here looks like a binary signal of ‘turn left’ or ‘turn right’. Inactivating this area just prior to the decision interferes with the ability to make a proper decision, so the information is certainly being used here, though only as an output. Where the information is being integrated and sent from, though, is not clear; it’s apparently not the PPC (and then maybe not LIP)!

Kenway Louie

The second good talk was from a member of Paul Glimcher’s lab, Kenway Louie. He was interested in the question of why we make worse decisions when given more choices. Although he wanted to talk about value, he used a visual task as a proxy and explainer. Let’s say you have two noisy options and you aren’t certain which is better: if the options are noisy but very distinct, it is easy to decide which one you want. However, if they are noisy and closer together in value, it becomes harder and harder to distinguish them, both behaviorally and as a matter of signal detection.

But now let’s add in a third object.  It also has some noisy value, but you only have to make a decision between the first two.  Should be easy right?  Let’s add in some neuroscience: in the brain, one common way to represent the world is ‘divisive normalization’.  Basically, the firing of a neuron is normalized by the activity of the other neurons in the region.  So now that we’ve added in the third option, the firing of the neurons representing the value of the other two objects goes down.  My notes were unfortunately…not great… so this is where I get a bit confused, because what I remember thinking doesn’t make total sense on reflection.  But anyway: this normalization interferes with the probability distributions of the two options making it more difficult to make the right choice, although it is nonlinear and the human behavior matches nicely (I think).  The paper is in press so hopefully I can report on it soon…
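Here is the intuition as I understood it, as a toy model (mine, not Louie’s): divisive normalization means a third option shrinks the represented difference between the first two, so a fixed amount of late noise causes more errors:

```python
import numpy as np

rng = np.random.default_rng(2)

def choose(values, sigma=10.0, noise=0.05, n_trials=10000):
    """Divisively normalize each option's value signal by the summed
    value of all options, add late noise, and pick between the first
    two options. Returns accuracy at picking the better of the two."""
    values = np.asarray(values, float)
    r = values / (sigma + values.sum())          # divisive normalization
    correct = 0
    for _ in range(n_trials):
        noisy = r + noise * rng.normal(size=r.size)
        correct += noisy[0] > noisy[1]           # option 0 is the better one
    return correct / n_trials

# Adding a third, irrelevant option shrinks the normalized difference
# between the first two, so accuracy should drop.
acc_two = choose([10.0, 8.0])
acc_three = choose([10.0, 8.0, 9.0])
```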

Paul Schrater

Paul Schrater gave a talk that was a mix of decision-making and foraging. His first and main point was that many of the things we refer to as having value are in fact secondary sources of value; money only has value inasmuch as it can buy things that satisfy first-order needs, such as food or movies. However, the same amount of money cannot always buy the same amount of goods, so value is really an inference problem, and he claims it can of course be handled through Bayesian inference.

His next important point is that we really need to think about decision-making as a process.  We are in a location, in which we must perform actions which have values and must fulfill some need states which, of course, influence our choice of location.  Thinking about the decision-process as this loop makes us realize we need to have an answer to the stopping problem or, how long should I stay in a location before I leave to another location?  The answer in economics tends to come from the answer to the Secretary Problem (how many secretaries should I interview before I just hire one?) and the answer in ecology comes from Optimal Foraging; in fact, both of these rely on measuring the expected mean value of the next option and both of these are wrong.  We can instead think of the whole question as a question of learning and get an answer by reinforcement learning.  Then when we stop relies not just on the mean expected reward but also the variance and other higher-order statistics.  And how do humans do when tested in these situations?  They rely on not just mean but also variance!  And they fit quite closely to the reinforcement learning approach.

He also talked about the distractor problem that Kenway Louie discussed, but my notes here don’t make much sense and I’m afraid I don’t remember what his answer was…