Americans: outliers among outliers

Researchers found that Americans perceive the line with the ends feathered outward (B) as being longer than the line with the arrow tips (A); this is the classic Müller-Lyer illusion. San foragers of the Kalahari, on the other hand, were more likely to see the lines as they are: equal in length. Subjects from more than a dozen cultures were tested, and Americans were at the far end of the distribution, seeing the illusion more dramatically than all others.

More recently, psychologists have challenged the universality of research done in the 1950s by the pioneering social psychologist Solomon Asch. Asch discovered that test subjects were often willing to make incorrect judgments on simple perception tests in order to conform with group pressure. When the test was performed across 17 societies, however, it turned out that group pressure had a range of influence. Americans were again at the far end of the scale, in this case showing the least tendency to conform to group belief.

It is not just our Western habits and cultural preferences that are different from the rest of the world, it appears. The very way we think about ourselves and others—and even the way we perceive reality—makes us distinct from other humans on the planet, not to mention from the vast majority of our ancestors. Among Westerners, the data showed that Americans were often the most unusual, leading the researchers to conclude that “American participants are exceptional even within the unusual population of Westerners—outliers among outliers.”


Unrelated to all that, 03/21 edition

Note: I now post these in my twitter feed first, so check it if you’re super bored!

Monarch butterflies are disappearing.  I can’t think of anything more important or more underreported.

Okay, we’re kind of biased.  Every good Bayesian knows ‘not being biased’ often means you’re just not being explicit about your assumptions.  Economists get explicit.

Sharks: now in groups, more terrifying.  Sharks hunt in groups and can learn from each other, too.

I guess that will do the trick.  Pandas have trouble mating and sometimes need a bit of… guidance.

Things were always more fun in the olden days.  On Victorians getting monkeys drunk and hungover, for science.

At least there’s only 20.  Advice for if you want a faculty job, told with a straight face.

As a theorist, I only use fake data.  Apparently in physics you sometimes get given fake data to see what you do with it, and they go so far as to not tell you until you’re about to submit a paper; I think if this happened to me I’d cry.

Kawaii psychophysics.  Because cats can see optical illusions.

Emperor Tamarins, always the swankiest monkeys.  Some monkeys sit in mud to cool down and relax, only to realize they can no longer recognize any of their friends; vicious fights ensue.

Image ALL the neurons!

So you want to image every neuron in the brain of a vertebrate?  What kind of crazy man are you?  Misha B. Ahrens, that’s who.

In what can only be described as a “crazy awesome” experiment, Ahrens used a recently emerging technique called light sheet microscopy to image the activity of (nearly) every neuron in the zebrafish brain simultaneously.  The method faces a slight problem in that neurons are active on the order of milliseconds, whereas Ahrens can only image the whole brain every 1.3 seconds.  Still, it seems reasonable that most behaviorally relevant activity can be captured at that speed.  A larger problem is that they might be evoking neural activity by shining light into the zebrafish’s eyes.

This is only a methods paper, which I’m guessing means that they’re presenting a ‘weak’ result, with a super awesome result to come in 6-12 months in Nature.  Their weak result still showcases the power of their method: by looking at single unit activity, they are able to find previously unknown coupling across different regions as well as specific subpopulations that are linked in distinct ways.

Awesome things are coming from the Ahrens lab.  I foresee it.


Ahrens, M., & Keller, P. (2013). Whole-brain functional imaging at cellular resolution using light-sheet microscopy. Nature Methods. DOI: 10.1038/nmeth.2434

Neuroscience is useful: NBA edition

Antoine Walker

Although I wasn’t able to attend it, Yonatan Loewenstein apparently gave a talk at a Cosyne workshop relating decision-making to NBA players.  I was curious to find the paper; while I ultimately could not, I did find a different one of his that was interesting.  One of the most commonly used methods in neuroscience to model learning is reinforcement learning.  In reinforcement learning, you learn from the consequences of your actions; intuitively, a reward will act to reinforce a behavior.  Although inspired by psychological theories of learning, it has gained support in neuroscience from the patterns of activity of dopamine cells, which provide exactly the learning error signal you’d expect.
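To make that concrete, here’s a minimal sketch of the prediction-error update at the heart of reinforcement learning (the function and parameter names are mine, not from the paper):

```python
def update_value(value, reward, learning_rate=0.1):
    """One step of prediction-error learning (Rescorla-Wagner style).

    The prediction error is the difference between what happened and
    what was expected -- the kind of signal dopamine cells appear to carry.
    """
    prediction_error = reward - value
    return value + learning_rate * prediction_error

# A reward of 1.0, encountered repeatedly, pulls the value estimate toward 1.0.
v = 0.0
for _ in range(50):
    v = update_value(v, 1.0)
```

The learning rate controls how strongly each new outcome overwrites old beliefs, which will matter below.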

Basketball is a dynamic game in which players are constantly evaluating their chance of making a shot, and whether they should pass or attempt a 2 or 3 point field goal (FGA).  One of the most contentious issues in basketball (statistics) is the ‘hot hand effect’: if you’ve successfully made a 3 point shot, are you more likely to make the next one?  Maybe it’s just one of those nights where you are on, your form is perfect, and every shot will sink.  The problem is, statistically speaking there is no evidence for it.

But the players sure think that it exists!  Now look at the figure to the right.  Here, the blue line represents how likely a player is to shoot a 3 point field goal if their last (0, 1, 2, 3) shots were made 3 point field goals.  In general, they shoot 3 pointers about 40% of the time.  If they made their last 3 pointer, they now have a ~50% chance of shooting a 3 pointer on their next attempt.  And if they make that one?  They have a 55% chance of shooting a 3 pointer.  Similarly, the red line follows the probability of shooting a 3 pointer if their last few shots were missed 3 pointers.

Okay, so basketball players believe in the hot hand, and act like it.  Why do they act like it?  If we take our model of the learning process, Reinforcement Learning, and apply it to the data, we actually get a great prediction of how likely a player is to shoot a 3 pointer!  The internal machinery that we use for learning the value of an action is also a good model for learning the value of taking a 3 pointer – and making a 3 pointer will only reinforce the idea of shooting another one (get it?)!

Alas, this type of behavior does not help anything; a player who makes a 3 pointer is 6% less likely to make his next 3 than if he had missed his last 3 pointer.  In fact, if we take our Reinforcement Learning model and see how each player behaves, we can estimate how susceptible that player is to learning.  Some players won’t change how they shoot (unsusceptible) and some players will learn a lot from each shot, with the history of made and missed shots having huge effects on how likely they are to shoot another 3.  And believe it or not, the players that are least susceptible to learning are the ones who get the most points out of each 3 point shot.  Unless you are Antoine Walker; then you will just shoot a lot of bad 3 pointers for the hell of it.
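As a toy illustration (my own, not the paper’s fitted model), you can think of that susceptibility as a per-player learning rate nudging the 3-point attempt probability after every make or miss; a learning rate of zero gives you a player immune to the hot hand:

```python
def simulate_attempt_rate(shot_outcomes, base_rate=0.4, learning_rate=0.05):
    """Toy model: a player's probability of attempting a 3 is nudged
    toward 1 after a made 3 and toward 0 after a miss.

    shot_outcomes: list of booleans (True = made 3, False = missed 3).
    Returns the attempt probability going into each shot.
    learning_rate = 0 reproduces an 'unsusceptible' player.
    """
    p = base_rate
    rates = []
    for made in shot_outcomes:
        rates.append(p)
        target = 1.0 if made else 0.0
        p += learning_rate * (target - p)  # same prediction-error form as above
    return rates

# After a streak of made 3s, the attempt probability creeps upward.
streak = simulate_attempt_rate([True, True, True])
```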

Finding non-existent ‘hidden patterns’ in noise is a natural human phenomenon and is a natural outgrowth of learning from past experiences.  So tell your parents!  Learning: not always good for you.


Neiman, T., & Loewenstein, Y. (2011). Reinforcement learning in professional basketball players. Nature Communications, 2. DOI: 10.1038/ncomms1580

Cosyne: Foraging!

I think I have found my people.  The workshops after the main Cosyne meeting were smaller and more focused, and really allowed you to delve into a topic.  I spent the first day at the Neural Mechanisms of Foraging workshop and found myself a bunch of neuroecologists!

I think I’m just going to summarize a bunch of talks instead of any one individually.  I missed the first few minutes of introduction, but I got the impression that this was the first meeting of ‘neuroforagers’ to ever actually take place; Michael Platt called it a “coming out party for foraging”.  Foraging – to define it briefly – is the decision to leave a reward source to explore new options.  It’s apparently a great task for monkeys, too – many basic behaviors that we train monkeys to perform can take a long time to train, whereas teaching them to forage happens in a single session.  It’s totally natural, which is itself a reason why we should be studying it!

There were two recurring themes in the talks: that the anterior cingulate cortex (ACC) is the foraging center, and that economics approaches aren’t doing much good.  Talk after talk recorded from the ACC or studied how ACC activity is shaped.  Just like the Basal Ganglia meeting that The Cellular Scale attended, every talk included The Dopamine Slide.  Michael Platt suggested at the end that he hoped every talk at future foraging meetings would include a figure from one of his papers (which one, I have now forgotten)!  Well, I don’t do ACC so probably not for me anyway.

The other theme was the failure of economic models to explain behavior.  Talk after talk included some variant of, “we tried fitting this to a [temporal discounting/risk-preference/reinforcement-learning/optimal foraging]  model but it didn’t account for the data”.  Almost all of them said that!  The naive assumption that we should move to optimize immediate reward is, somehow, failing.  Some kind of new principle (or perhaps better model-fitting) will be needed to consistently explain actual behavior.

Cosyne: The Big Talks

Besides the great stuff on decision-making, the other part of the main meeting I wanted to discuss were some of the Big Talks.  This is the place where some of the Big Guns of neuroscience were just doing their thing, talking about neuroscience.

Bill Bialek

Bill Bialek gave a talk with the title, “Are we asking the right questions?”  His first slide appeared with a large “No” and he declared that it wouldn’t be a particularly interesting talk if the answer was Yes, would it?  Unfortunately, Yes was the answer I was hoping for!  I had wanted him to give a deep, introspective talk about the questions we’re asking, the things that are right and wrong about them, and how we can ask better questions.  I’ve actually been wondering the same thing lately with respect to Bialek’s work.  Bill Bialek is a statistical physicist who applies the methods of stat mech to neural systems.  He gets really interesting results about the properties of large neural systems and neural coding, but I’m not quite sure if the answers he gets are relevant to the particular questions I’m interested in asking.  For the record, Bialek thought that we should be thinking about predictive coding.  That is, how neurons reflect not information about the environment but rather information that predicts what the environment will be.

Eve Marder

Eve Marder studies the lobster stomatogastric ganglion (STG), which is the neural system that controls the stomach, basically.  It’s a great setup and has yielded tons of interesting results, but there wasn’t exactly tons that was new in the talk.  Fortunately, Marder is an excellent lecturer and it was interesting throughout.  The most interesting comment she made is that they actually know the whole connectome of the STG and have the ability to record from neurons in the system for weeks at a time!  And yet they still don’t know how it all works.  Take that, connectomists.

If you are interested at all in her work or seeing her talk – and you should be – watch this short youtube series on her work.

Terry Sejnowski

Terry Sejnowski gave a talk in two parts.  Strangely, the first part had little to do with his own research and instead served as a thematic introduction to the second part.  He spent his time explaining how a camera based on a simple model of the retina – spiking only when an edge moved across its field of view – was able to naturally accomplish things, such as identifying objects, that researchers in computer vision have spent decades trying to do, only somewhat successfully.  And emulating the retina naturally endows it with other great features such as amazing temporal precision and extremely low energy usage.  See?  Neuroscience is useful.

This led him to the second part of his talk: the Brain Activity Map (BAM).  He started off telling the story of how the NYT article came into being, and why it seemed so sketchy.  Basically: when Obama mentioned brain research in his State of the Union, a member of the NIH who knew about the project happened to tweet, “Obama supports the Brain Activity Map!” (or something similar).  From this one little tweet, the reporter’s instincts kicked in – he’d certainly never heard of any “Brain Activity Map” – and, after calling around to his sources, he got the scoop on BAM.  Sejnowski was here to finally let the rest of us neuroscientists in on what was going on.

I think a lot of what he talked about has been released in the recent Science perspective, but he certainly was excited about having met Obama… He also said (I think) that the BAM proposal was on a list of ten big science projects, and it beat them out.  The data that will be stored – the activity of thousands or even millions of cells simultaneously – will be enormous.  Microsoft, Google, and Qualcomm were in on the meetings and apparently basically said, “let us deal with that.”  Since the data size is so enormous, the idea will be to have “brain observatories” where the data will reside; the data will be open access and analyses will be done on computers at the ‘observatories’.  That way, no one has to worry about downloading the data sets!

Of course, the thing on everyone’s mind is whether funding for BAM will take away funding from other basic neuroscience research.  When that came up in questions, Eve Marder said that the NIH heads have been discussing it and they want to make sure that there are no reductions in R01s (the ‘basic’ grant given to researchers).  Basically, this is just more money.  If it ever passes (to quote Sejnowski: “the hope is that both Democrats and Republicans have brains”).

Again, the important point here is that Sejnowski was really, really excited and it was kind of adorable.

Tony Movshon

Ah, Tiny Movshon (as my iphone kept trying to autocorrect).  This was by far my favorite talk of the meeting, mostly because Movshon basically trolled all the rodent vision people in the audience.  He gave a great, contradictory 45 minute talk about how if you’re not doing primate vision, you’re wasting your time.  Okay, he really said that the mouse visual system is too different from the primate one to be of any use.  Because it’s too evolutionarily far away.  Except the cat visual system is great!  Even though the cat is even more evolutionarily distant.  But whatever, his solution to the problem of not being able to do genetics in monkeys and needing a replacement for mice is – wait for it – the mouse lemur!  Obviously.  Here is your future cuddly neuroscience overlord:

Unfortunately, I don’t think we really know anything about the mouse lemur yet, but that shouldn’t stop us from replacing all our mice with this cute little guy!

Anyway, at the end of the talk it was like someone disturbed a bees’ nest.  All the rodent vision people were visibly distressed; would it be mean to say that as a lonely invertebrate person it was a nice bit of schadenfreude?


Cosyne: Decision-making

I recently spent a week in Salt Lake City at the Cosyne (COmputational and SYstems NEuroscience) meeting; people had told me that it was their favorite conference, and now I understand why.  Other attendees have put up their reactions, so I figured it was about time I got off the couch and did the same.

Probably the biggest effect this meeting had on me is that I started using twitter a bit more seriously – follow me at @neuroecology – and participated in my first “tweet-up” (is that really a thing?).  There are lots of great neuroscientists tweeting though there should be more!

For simplicity of organization, there will be three posts on Cosyne: one on a few decision-making talks, one on The Big Talks, and one on neural correlates of foraging.

Carlos Brody

On decision-making, the first (and longest) talk was by Carlos Brody.  His talk was focused on the question of how we make decisions in noisy environments.  In this case, rats had to sit around listening to two speakers emit clicks at random (Poisson) intervals and decide which speaker, left or right, was clicking more.  We typically think of the way animals make these types of decisions as a ‘noisy integrator‘: each point of information – each click – is added up, with some noise thrown in because we’re imperfect and forgetful, and the environment (and our minds!) are full of noise.  The decision is then made when enough information has accumulated that the animal can be confident in going one direction or another.

One small problem is that there are a lot of models consistent with the behavioral data.  How noisy is the internal mind?  Is it noisy at all?  How forgetful are we?  That sort of thing.  The Brody lab fit the data to many models and found that the one that most accurately describes the observed behavior is a slow accumulator that is leaky (ie a bit forgetful) but where the only noise is from the sensory input!  Actually, I have in my notes both that it is ‘lossless’ and that it is ‘leaky’, so I’m not sure which of the two is accurate, but the important detail is that once the information is in the brain it gets computed on perfectly and the integration itself is noiseless; all the noise in the system comes from the sensory world.
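A rough sketch of that model class is below (parameter values and names are my own guesses, not the Brody lab’s actual fit): a leaky accumulator where the only randomness sits on the sensory inputs, never on the integration itself:

```python
import random

def leaky_accumulator(left_clicks, right_clicks, leak=0.1,
                      input_noise=0.5, dt=0.01, duration=1.0):
    """Integrate click evidence with leak; noise only on the inputs.

    left_clicks / right_clicks: sorted click times in seconds.
    Returns 'right' if the accumulated evidence ends up positive.
    """
    a = 0.0
    t = 0.0
    li = ri = 0
    while t < duration:
        # leak: the accumulator forgets a little each time step
        a -= leak * a * dt
        # each click contributes +/-1 of evidence, corrupted by sensory noise
        while li < len(left_clicks) and left_clicks[li] <= t:
            a -= 1.0 + random.gauss(0.0, input_noise)
            li += 1
        while ri < len(right_clicks) and right_clicks[ri] <= t:
            a += 1.0 + random.gauss(0.0, input_noise)
            ri += 1
        t += dt
    return 'right' if a > 0 else 'left'
```

With many more right clicks than left clicks, the model picks ‘right’ almost every time despite the noise; shrinking the click-rate difference is what makes the task hard.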

They also recorded from two areas in the rat brain, the posterior parietal cortex (PPC) and the frontal orienting fields (FOF).  The PPC is an area analogous to LIP where neural activity looks like it is integrating information; you can even look at the neural activity in response to every click from a speaker and see the information (activity) go up and down!  The rational expectation is that you’d need this information to make a decision, right?  Well, when he goes and inactivates the region there is no effect on the behavior.  The other region he records from is the FOF, which is responsible for orienting the head (say, in the direction of the right decision).  The neural activity here looks like a binary signal of ‘turn left’ or ‘turn right’.  Inactivating this area just prior to the decision interferes with the ability to make a proper decision, so the information is certainly being used here, though only as an output.  Where the information is being integrated and sent from, though, is not clear; it’s apparently not the PPC (and then maybe not LIP)!

Kenway Louie

The second good talk was from a member of Paul Glimcher’s lab, Kenway Louie.  He was interested in the question of why we make worse decisions when given more choices.  Although he wanted to talk about value, he used a visual task as a proxy and explainer.  Let’s say you have two noisy options and you aren’t certain which is better; if the options are noisy but very distinct, it is easy to decide which one you want.  However, if they are noisy and closer together in value, it becomes harder and harder to distinguish them, both behaviorally and as a matter of signal detection.

But now let’s add in a third object.  It also has some noisy value, but you only have to make a decision between the first two.  Should be easy, right?  Let’s add in some neuroscience: in the brain, one common way to represent the world is ‘divisive normalization’.  Basically, the firing of a neuron is normalized by the activity of the other neurons in the region.  So now that we’ve added in the third option, the firing of the neurons representing the value of the other two objects goes down.  My notes were unfortunately…not great…so this is where I get a bit confused, because what I remember thinking doesn’t make total sense on reflection.  But anyway: this normalization interferes with the probability distributions of the two options, making it more difficult to make the right choice, although it is nonlinear and the human behavior matches nicely (I think).  The paper is in press so hopefully I can report on it soon…
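A common way divisive normalization is formalized is to divide each option’s value by a constant plus the summed value of everything on the menu; a quick sketch with made-up numbers shows why a third option compresses the gap between the top two:

```python
def divisive_normalization(values, sigma=1.0):
    """Each option's firing rate is its raw value divided by sigma
    plus the summed value of all options in the set."""
    total = sigma + sum(values)
    return [v / total for v in values]

two = divisive_normalization([10.0, 8.0])
three = divisive_normalization([10.0, 8.0, 6.0])

# The gap between the top two options shrinks when a third is added,
# making them harder to tell apart against a fixed level of neural noise.
gap_two = two[0] - two[1]
gap_three = three[0] - three[1]
```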

Paul Schrater

Paul Schrater gave a talk that was a mix of decision-making and foraging.  His first and main point was that many of the things we refer to as having value are in fact secondary sources of value; money only has value inasmuch as it can buy things that are first-order needs, such as food or movies.  However, the same amount of money cannot always buy the same amount of goods, so value is really an inference problem, and he claims it can of course be explained through Bayesian inference.

His next important point is that we really need to think about decision-making as a process.  We are in a location, in which we must perform actions, which have values and must fulfill some need states, which, of course, influence our choice of location.  Thinking about the decision process as this loop makes us realize we need an answer to the stopping problem: how long should I stay in a location before I leave for another one?  The answer in economics tends to come from the Secretary Problem (how many secretaries should I interview before I just hire one?) and the answer in ecology comes from Optimal Foraging; both of these rely on measuring the expected mean value of the next option, and both of these are wrong.  We can instead think of the whole question as one of learning and get an answer via reinforcement learning.  Then when we stop depends not just on the mean expected reward but also on the variance and other higher-order statistics.  And how do humans do when tested in these situations?  They rely not just on the mean but also on the variance!  And they fit quite closely to the reinforcement learning approach.
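For reference, the classical Secretary Problem answer that the talk pushes back on is a pure stopping rule on relative ranks: skip roughly the first n/e candidates, then take the first one better than everything seen so far.  A quick Monte Carlo (my own sketch) recovers the familiar ~37% success rate, with no role for variance at all:

```python
import math
import random

def secretary_trial(n, rng):
    """One run of the classic 1/e stopping rule.

    Observe the first n/e candidates without committing, then accept
    the first candidate better than all of them.  Returns True if the
    overall best candidate ends up being the one chosen.
    """
    ranks = list(range(n))          # rank n-1 is the best candidate
    rng.shuffle(ranks)
    cutoff = int(n / math.e)        # observation-only phase
    best_seen = max(ranks[:cutoff], default=-1)
    for r in ranks[cutoff:]:
        if r > best_seen:           # first candidate beating all observed
            return r == n - 1
    return ranks[-1] == n - 1       # otherwise forced to take the last one

rng = random.Random(0)
trials = 2000
wins = sum(secretary_trial(100, rng) for _ in range(trials))
success_rate = wins / trials        # theory: approaches 1/e, about 0.37
```

Note the rule only ever compares candidates to the best one seen so far (an expected-best criterion); Schrater’s point is that human stopping behavior tracks richer statistics than this.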

He also talked about the distractor problem that Kenway Louie discussed, but my notes here don’t make much sense and I’m afraid I don’t remember what his answer was…