I spent a week recently in Salt Lake City at Cosyne (the COmputational and SYstems NEuroscience conference); people had told me that it was their favorite conference, and now I understand why. Other attendees have put up their reactions, so I figure it's about time I got off the couch and did the same.
Probably the biggest effect this meeting had on me is that I started using Twitter a bit more seriously – follow me at @neuroecology – and participated in my first "tweet-up" (is that really a thing?). There are lots of great neuroscientists tweeting, though there should be more!
For simplicity of organization, there will be three posts on Cosyne: one on a few decision-making talks, one on The Big Talks, and one on neural correlates of foraging.
On decision-making, the first (and longest) talk was by Carlos Brody. His talk was focused on the question of how we make decisions in noisy environments. In this case, rats had to sit around listening to two speakers emit clicks at random (Poisson) intervals and decide which speaker, left or right, was clicking more. We typically think of the way animals make these types of decisions as a 'noisy integrator': each point of information – each click – is added up, with some noise thrown in there because we're imperfect, forgetful, and the environment (and our minds!) are full of noise. The decision is then made when enough information has been accumulated that the animal can be confident in going one direction or another.
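To make the noisy-integrator idea concrete, here's a quick sketch of one trial, my own toy version rather than the Brody lab's actual model: clicks arrive from two Poisson processes, and an accumulator adds +1 for each right click and -1 for each left click, with some leak between clicks and some noise on each input. All the parameter values (`leak`, `sensory_noise`, the click rates) are made up for illustration.

```python
import math
import random

def simulate_trial(rate_left=20.0, rate_right=30.0, duration=1.0,
                   leak=0.1, sensory_noise=0.5, seed=None):
    """One trial of a leaky, noisy accumulator driven by Poisson clicks.

    Toy parameters for illustration only; the real model is fit to
    rat behavior with many more free parameters.
    """
    rng = random.Random(seed)

    def poisson_clicks(rate):
        # Poisson process: exponentially distributed gaps between clicks.
        t, clicks = 0.0, []
        while True:
            t += rng.expovariate(rate)
            if t > duration:
                return clicks
            clicks.append(t)

    events = [(t, -1.0) for t in poisson_clicks(rate_left)] + \
             [(t, +1.0) for t in poisson_clicks(rate_right)]
    events.sort()

    a, t_prev = 0.0, 0.0
    for t, sign in events:
        a *= math.exp(-leak * (t - t_prev))       # leak: forget a little between clicks
        a += sign + rng.gauss(0.0, sensory_noise)  # each click adds noisy evidence
        t_prev = t
    return "right" if a > 0 else "left"
```

Run it many times with the right speaker clicking faster and the accumulator picks "right" on the large majority of trials; shrink the rate difference and the choices get noisier, just like the behavior.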
One small problem with this is that there are a lot of models that are consistent with the behavioral data. How noisy is the internal mind? Is it noisy at all? How forgetful are we? That sort of thing. The Brody lab fit the data to many models and found that the one that most accurately describes the observed behavior is a slow accumulator that is leaky (i.e. a bit forgetful) but where the only noise is from the sensory input! Actually, I have in my notes that it is 'lossless' but also that it is 'leaky', so I'm not sure which of the two is accurate, but the important detail is that once the information is in the brain it gets computed on perfectly and our integration is noiseless; all the noise in the system comes from the sensory world.
They also recorded from two areas in the rat brain, the posterior parietal cortex (PPC) and the frontal orienting fields (FOF). The PPC is an area analogous to LIP where neural activity looks like it is integrating information; you can even look at the neural activity in response to every click from a speaker and see the information (activity) go up and down! The rational expectation is that you'd need this information to make a decision, right? Well, when he goes and inactivates the region there is no effect on the behavior. The other region he records from is the FOF, which is responsible for orienting the head (say, in the direction of the right decision). The neural activity here looks like a binary signal of 'turn left' or 'turn right'. Inactivating this area just prior to the decision interferes with the ability to make a proper decision, so the information is certainly being used here, though only as an output. Where the information is being integrated and sent from, though, is not clear; it's apparently not the PPC (and then maybe not LIP)!
The second good talk was from a member of Paul Glimcher's lab, Kenway Louie. He was interested in the question of why we make worse decisions when given more choices. Although he wanted to talk about value, he used a visual task as a proxy and explainer. Let's say you have two noisy options and you aren't certain which one is better; if the options are noisy but very distinct, it is easy to decide which one you want. However, if they are noisy and closer together in value it becomes harder and harder to distinguish them, both behaviorally and as a matter of signal detection.
But now let's add in a third object. It also has some noisy value, but you only have to make a decision between the first two. Should be easy, right? Let's add in some neuroscience: in the brain, one common way to represent the world is 'divisive normalization'. Basically, the firing of a neuron is normalized by the activity of the other neurons in the region. So now that we've added in the third option, the firing of the neurons representing the value of the other two objects goes down. My notes were unfortunately…not great…so this is where I get a bit confused, because what I remember thinking doesn't make total sense on reflection. But anyway: this normalization interferes with the probability distributions of the two options, making it more difficult to make the right choice, although it is nonlinear and the human behavior matches nicely (I think). The paper is in press so hopefully I can report on it soon…
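The core of divisive normalization is simple enough to show in a few lines: each option's firing rate is its value divided by the summed value of all the options on the table (plus a constant). This is my own minimal sketch with made-up numbers, not the model from the talk, but it shows the key effect: adding a distractor shrinks the gap between the two options you actually care about.

```python
def divisive_norm(values, r_max=100.0, sigma=1.0):
    """Divisively normalized firing rates: each value divided by the
    sum of all values plus a semi-saturation constant sigma.
    Parameter choices here are hypothetical."""
    total = sigma + sum(values)
    return [r_max * v / total for v in values]

# Two options alone:
two = divisive_norm([10.0, 8.0])
# The same two options plus an irrelevant third option:
three = divisive_norm([10.0, 8.0, 9.0])

# The firing-rate gap between the top two options shrinks when the
# distractor is added, so noisy neurons can distinguish them less well.
gap_two = two[0] - two[1]
gap_three = three[0] - three[1]
```

With these numbers the gap between the two options' firing rates drops from about 10.5 spikes to about 7.1 once the third option joins the pool, which is the intuition for why a distractor you'll never choose can still make the choice harder.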
Paul Schrater gave a talk that was a mix of decision-making and foraging. His first and main point was that many of the things that we refer to as having value are in fact secondary sources of value; money only has value inasmuch as it can buy things that are first-order needs such as food or movies. However, the same amount of money cannot always buy the same amount of goods, so value is really an inference problem, and he claims it can of course be explained through Bayesian inference.
His next important point is that we really need to think about decision-making as a process. We are in a location, in which we must perform actions which have values and must fulfill some need states which, of course, influence our choice of location. Thinking about the decision process as this loop makes us realize we need an answer to the stopping problem: how long should I stay in one location before I leave for another? The answer in economics tends to come from the answer to the Secretary Problem (how many secretaries should I interview before I just hire one?) and the answer in ecology comes from Optimal Foraging; in fact, both of these rely on measuring the expected mean value of the next option, and both of these are wrong. We can instead think of the whole question as a question of learning and get an answer from reinforcement learning. Then the decision of when to stop relies not just on the mean expected reward but also on the variance and other higher-order statistics. And how do humans do when tested in these situations? They rely on not just the mean but also the variance! And they fit quite closely to the reinforcement learning approach.
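For readers who haven't met the Secretary Problem: the classic answer is to interview the first n/e candidates without hiring, then take the first one better than everything seen so far, which finds the single best candidate about 37% of the time. A quick simulation (my own sketch, not from the talk) of that rule:

```python
import math
import random

def secretary_trial(n, rng):
    """One run of the classic secretary problem using the 1/e rule:
    observe the first n/e candidates without hiring, then hire the
    first candidate better than all of those."""
    ranks = list(range(n))     # rank n-1 is the best candidate
    rng.shuffle(ranks)         # candidates arrive in random order
    cutoff = round(n / math.e)
    best_seen = max(ranks[:cutoff]) if cutoff else -1
    for r in ranks[cutoff:]:
        if r > best_seen:
            return r == n - 1  # did we hire the overall best?
    return ranks[-1] == n - 1  # never beat the benchmark: stuck with the last one

rng = random.Random(0)
wins = sum(secretary_trial(50, rng) for _ in range(5000))
success_rate = wins / 5000     # lands near 1/e ≈ 0.37
```

Note that this rule only ever compares candidates by rank against the best seen so far, i.e. it keys on the expected value of the next option; Schrater's point is that this mean-only logic is exactly what real foragers don't do, since their stopping decisions also track variance.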
He also talked about the distractor problem that Kenway Louie discussed, but my notes here don’t make much sense and I’m afraid I don’t remember what his answer was…