Science and Halloween


Well, Halloween has brought me tricks in the form of a devastating cold that has left me unable to do much over the past few days beyond sleep and watch silly Bollywood movies/depressing NBA games/Whit Stillman movies about disco (incidentally, that movie could be remade verbatim about contemporary hipster/blogger/EDM culture). As I am unable to bring you any coherent new content, here are some links about science and Halloween:

Official SFN bloggers

In case you missed it, SFN released their usual bizarre list of official bloggers who will be doing the yeoman’s work of reporting to you on all the excitement that occurs at the conference1. As per usual, several of the bloggers – including the first one on the list! – have only posted maybe twice in their lives (this is not a new problem). Who is on this selection committee? How exactly are they deciding who to include?! It’s a shame and a waste of potential community building, one that risks a perception of the blogging community as not worth bothering with.

But maybe they are just choosing exciting new bloggers with a bright future! What has happened with similar young fellows from previous years? I had a hard time finding the previous lists – feel free to add them in the comments – but Neurodojo mentioned a few from 2011. Well, one of the blogs flat-out doesn’t exist anymore, and the other two never posted again. Good job, guys2!

Luckily, most of the bloggers they chose are quite good. Here is the list, with twitter handles:

Footnotes:

1. This is not sour grapes or anything; I didn’t think to apply.

2. In all fairness, the neuroflocks idea is a good one and I wish it had kept up. Perhaps we could curate something like that again?

Reinforcement Learning and Decision Making (RLDM) 2013

I have just returned from the Reinforcement Learning and Decision Making (RLDM) conference and it was…quite the learning experience. As a neuroscientist, I tend to read only scientific papers written by other neuroscientists, so it is easy to forget how big the Reinforcement Learning community really is. The talks were split pretty evenly between the neuroscience/psychology community and the machine learning/CS community, with a smattering of other people (including someone who works on forestry problems to find the optimal response to forest fires!). All in all, it was incredibly productive and I learned a ton about the machine learning side of things while meeting great people.

I think my favorite fact that I learned was from Tom Stafford: there is a color direction called the ‘tritan line’ which is visible to the visual cortex but not to certain other visual areas (specifically the superior colliculus). Just the idea that there is a color invisible to certain visual areas is…bizarre and awesome. The paper he presented is discussed on his blog here.

There were a few standout talks.

Joe Kable gave a discussion of the infamous marshmallow task, where a young child is asked not to eat a marshmallow while the adult leaves the room for some indeterminate amount of time. It turns out that if the child believes the adult’s return time is distributed in a Gaussian fashion then it makes sense to wait, but if the return time follows a heavy-tailed distribution then it makes sense to eat the marshmallow. This is because, for a heavy-tailed distribution, the predicted amount of time until the adult returns keeps increasing the longer you have already waited. And indeed, if you ask subjects to do a delay task they act as if the distribution of delay times is heavy-tailed. See his paper here.
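
To make that logic concrete, here is a minimal simulation sketch (not taken from Kable’s paper; the distributions and parameters are invented for illustration) showing that the expected remaining wait shrinks with elapsed time under a Gaussian belief but keeps growing under a heavy-tailed one:

```python
# Hypothetical illustration: expected *remaining* wait for the adult's return,
# conditioned on having already waited `elapsed` minutes, under two beliefs
# about the return-time distribution. All parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

gaussian = rng.normal(loc=10.0, scale=5.0, size=1_000_000)   # light-tailed belief
gaussian = gaussian[gaussian > 0]                            # return times can't be negative
heavy = (rng.pareto(a=1.5, size=1_000_000) + 1.0) * 5.0      # heavy-tailed (Pareto) belief

def expected_remaining_wait(samples, elapsed):
    """Mean additional wait, given the adult has not yet returned after `elapsed` minutes."""
    still_out = samples[samples > elapsed]
    return (still_out - elapsed).mean()

for t in [0, 5, 10, 15, 20]:
    print(f"waited {t:>2} min | Gaussian belief: {expected_remaining_wait(gaussian, t):5.1f} "
          f"| heavy-tailed belief: {expected_remaining_wait(heavy, t):5.1f}")
```

Under the Gaussian belief the predicted extra wait shrinks as time passes (so keep waiting); under the Pareto belief it grows (so eat the marshmallow).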

Yin Li used monkeys to ask how an animal’s learning rate changes depending on the situation. There is no single optimal learning rate: it depends on the environment. If you are tracking a target with little noise in between sudden, dramatic changes (small variance between sudden changes in mean), then you want a high learning rate; you are not at risk of being overly responsive to the internal variability of the signal while it is stationary. On the other hand, if the signal is very noisy but its mean does not change much, then you want a low learning rate. When a monkey is given a task like this, it does about as well as a Bayesian-optimal model. I’m not sure which model he used, though I think this is a problem that has gotten attention in vision (see Wark et al. and DeWeese & Zador). Anyway, when they try to fit a bog-standard Reinforcement Learning model, it cannot fit the data. This riled up the CS people in the audience, who suggested that something called “adaptive learning RL” could have fit the data, a technique I am not aware of. Although Li’s point was that the basic RL algorithm is insufficient to explain behavior, it also highlights the lack of crosstalk between the two RL kingdoms.
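
Here is a hedged toy version of that trade-off (my own illustration, not Li’s task or model): a fixed-learning-rate delta rule tracking a hidden mean, scored by mean squared error, in the two kinds of environments described above.

```python
# Toy comparison of learning rates; environments and parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

def tracking_error(alphas, means, noise_sd):
    """Mean squared error of a delta-rule tracker for each learning rate alpha."""
    obs = means + rng.normal(0, noise_sd, size=means.size)
    errors = {}
    for alpha in alphas:
        estimate, sq_err = 0.0, 0.0
        for x, true_mean in zip(obs, means):
            estimate += alpha * (x - estimate)          # delta-rule update
            sq_err += (estimate - true_mean) ** 2
        errors[alpha] = round(sq_err / means.size, 2)
    return errors

T = 5000
jumpy  = np.repeat(rng.normal(0, 5, size=T // 250), 250)   # mean jumps every 250 trials
stable = np.zeros(T)                                        # mean never changes

print("clean but jumpy signal :", tracking_error([0.05, 0.5], jumpy,  noise_sd=0.5))
print("noisy but stable signal:", tracking_error([0.05, 0.5], stable, noise_sd=5.0))
```

The high learning rate (0.5) wins when the mean jumps around with little noise, and the low learning rate (0.05) wins when the mean is stable but the observations are noisy, which is exactly why a single fixed-rate RL model struggles to fit behavior across both regimes.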

Michael Littman gave an absolutely fantastic talk asking how multiple agents should coordinate their behavior. If you use RL, one possibility is just to treat other agents as randomly moving objects…but “that’s a bit autistic”, as Littman put it. Instead, you can do something like minimax or maximin. Then you just need to find the Nash equilibrium! Unfortunately this doesn’t always converge to the correct answer, there can be multiple equilibria, and it requires access to the other agent’s values. Littman suggested that side payments can solve a lot of these problems (I think someone was paging Coase).
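
For readers outside the CS world, here is a tiny sketch of what maximin means in a matrix game (a generic textbook example, nothing to do with Littman’s specific algorithms): the row player chooses the mixed strategy that maximizes her worst-case payoff.

```python
# Maximin by brute force over mixed strategies in a 2x2 zero-sum game (matching pennies).
import numpy as np

payoff = np.array([[ 1., -1.],     # row player's payoff: +1 if the choices match, -1 otherwise
                   [-1.,  1.]])

best_p, best_value = 0.0, -np.inf
for p in np.linspace(0, 1, 1001):          # probability of playing the first row
    strategy = np.array([p, 1 - p])
    worst_case = (strategy @ payoff).min() # assume the column player responds as harshly as possible
    if worst_case > best_value:
        best_p, best_value = p, worst_case

print(f"maximin strategy: play row 0 with p = {best_p:.2f}, guaranteed value = {best_value:.2f}")
# For matching pennies this recovers the 50/50 Nash equilibrium with value 0.
```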

Finally, the amazing Alex Kacelnik gave a fascinating talk about parasitism in birds, particularly cuckoos. It turns out that when you take into account the costs of eggs and such, it might actually be beneficial to the host to raise 1-2 parasite eggs; at least, it’s not straightforward that killing the parasites is the optimal decision. Anne Churchland asked whether neurons in the posterior parietal cortex of rats show mixed sensory and decision signals, and then showed that they are orthogonal at the level of the population. Paul Phillips gave a very lucid talk detailing the history of dopamine and TD learning. Tom Dietterich showed how reinforcement learning is being used by the government to make decisions for fire and invasive-species control. And Pieter Abbeel showed robots! See, for instance, the Willow Garage PR2 fetching beer (other videos):

Here are some other robots he mentioned.

Some final thoughts:

1. CS people are interested in convergence proofs, etc. But in the end, a lot of their talks were really just them acting as engineers trying to get things to work in the real world. That’s not that far from what psychologists and neuroscientists are doing: trying to figure out why things are working the way that they are.

2. In that spirit, someone in psych/neuro needs to take the leading-edge of what CS people are doing and apply it to human/animal models of decision-making. I’ve never heard of Adaptive Reinforcement Learning; what else is there?

3. At the outset, it would help if they could make clear what the open research questions are for each field. At the end, maybe there could be some discussion of how to get the fields to collaborate more.

4. Invite some economists! They have this whole thing called Decision Theory… and would have a lot to contribute.

 

Risk aversion

[This post is a stub that will be expanded as time goes on and I learn more, or figure out how to present the question better.]

Humans, and many animals, tend to like predictability. When things get crazy, chaotic, unpredictable – we tend to avoid them. This is called risk aversion: preferring safe, predictable outcomes to unpredictable ones.

Take the choice between a guaranteed $1,000,000 or a 10% chance of $10,000,000 with a 90% chance of nothing at all. How many people would choose the riskier option? Very few, it turns out. But we aren’t always risk-averse. When animals search for food, they tend to prefer safer areas to riskier ones until they start getting exceptionally peckish. Once starving, animals are often risk-seeking, and are willing to go to great lengths for the chance to get food.

Why are we risk-averse? There are a few reasons. First off, unpredictability means that the information we have about our environment is not as useful, and possibly downright wrong. On the other hand, risk aversion may just come from experience. Imagine that you are given the choice between two boxes, each of which will give a reward when opened, with rewards reset when the box is closed. One of these boxes will give you lots of reward some of the time and no reward the rest of the time, while the other box will always give you a little reward. Over the long run the two boxes will give you the same amount of reward, but when you start opening them up? You are likely to hit a dry spell from the risky box. Whenever you get no reward from the risky box, you feel more inclined to open the safer box. This gives you a nice little reward! So now you like this box a little better. Maybe you think it’s a good idea to peek into the risky box now? Ah, foiled again, that box sucks, better stick with the safe box that you know.

This is the basic logic behind the Reinforcement Learning model of risk-aversion as characterized by Yael Niv in 2002 (does anyone know an older reference?).
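
Here is a minimal simulation sketch of that story (in the spirit of the logic above; the parameters are arbitrary and this is not Niv’s actual model): delta-rule learners choosing via softmax between a safe box and a risky box with the same expected reward mostly end up valuing the safe box more.

```python
# Hypothetical illustration of risk aversion emerging from learning, not a published model.
import numpy as np

rng = np.random.default_rng(2)
n_agents, n_trials, alpha, beta = 1000, 200, 0.3, 3.0   # arbitrary parameters

prefer_safe = 0
for _ in range(n_agents):
    values = np.zeros(2)                                # learned values: [safe box, risky box]
    for _ in range(n_trials):
        p_safe = 1.0 / (1.0 + np.exp(-beta * (values[0] - values[1])))   # softmax choice
        choice = 0 if rng.random() < p_safe else 1
        # equal expected reward: safe always pays 0.5; risky pays 5.0 with probability 0.1
        reward = 0.5 if choice == 0 else (5.0 if rng.random() < 0.1 else 0.0)
        values[choice] += alpha * (reward - values[choice])               # delta-rule update
    prefer_safe += values[0] > values[1]

print(f"agents ending up with a higher value on the safe box: {prefer_safe / n_agents:.2f}")
```

Bad luck with the risky box is self-reinforcing: once its learned value drops, the agent stops sampling it, so the estimate never gets corrected upward, and the safe box wins by default.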

See also: Ellsberg Paradox, Prospect Theory

Economics may be a science, but it is not one of the sciences

(Begin poorly-thought-out post:)

Raj Chetty wrote an article for the New York Times, which has been passed around the economics blogosphere, on why economics is a science:

What kind of science, people wondered, bestows its most distinguished honor on scholars with opposing ideas? “They should make these politically balanced awards in physics, chemistry and medicine, too,” the Duke sociologist Kieran Healy wrote sardonically on Twitter.

But the headline-grabbing differences between the findings of these Nobel laureates are less significant than the profound agreement in their scientific approach to economic questions, which is characterized by formulating and testing precise hypotheses. I’m troubled by the sense among skeptics that disagreements about the answers to certain questions suggest that economics is a confused discipline, a fake science whose findings cannot be a useful basis for making policy decisions.

He goes on to argue, strangely, that economics is a science because it is now primarily empirical. I’m not particularly interested in the argument over what counts as a “real” science – when I did physics, I remember people liked to make fun of biology for not being a real “hard” science, etc.

But I spend a lot of time talking to people across the scientific spectrum – physicists, biologists, psychologists, economists. And economists are consistently the outlier in what they think about and who they reference. They are simply not a part of the broader natural sciences community. Look at the interdisciplinary connectivity between fields in the picture above. There is a clear cluster in the center of social studies and a largely separate ring of the natural sciences. Here’s another way of viewing it:

No matter how you slice it, economics is just not part of the natural sciences community. It’s starting to edge there, with some hesitant links to neuroscience and genomics, but it’s not there yet. I find it all a bit baffling. Why has economics separated itself so much from the rest of the natural sciences?

Richard Thaler on behavioral economics and nudges

Since the Nobel Prize committee decided to honor the rationality of the markets (or lack thereof), here’s a well-timed interview with behavioral economist Richard Thaler:

Region: It’s hard to summarize the field, but you’ve written that there are three characteristics that differentiate Homo economicus from Homo sapiens: bounded rationality, bounded self-interest and bounded self-control.

Thaler: Those are the three things that—in the terminology Cass Sunstein and I use in our book Nudge—distinguish humans from “econs,” short for Homo economicus. But I’ve now added a fourth “bound” that we also need in order to have behavioral economics: bounded markets.

If you had asked me in 1980 to say which field do you think you have your best shot at affecting, finance would have been the least likely, essentially because of the arguments that Becker’s making: The stakes are really high, and you don’t survive very long if you’re a trader who loses money.

Region: And you found that investors overreacted to both good and bad news; also, they were overconfident in their investing ability. The implication was that market prices weren’t always right. In other words, markets weren’t necessarily efficient, in contradiction to the efficient market hypothesis (EMH). Then in 2001, with Owen Lamont, you studied equity carve-outs and found more evidence that markets aren’t good at estimating fundamental value.

Thaler: Yes. Those papers highlight the two aspects of the efficient market hypothesis that I sometimes call the “no free lunch” part and the “price is right” part.
De Bondt and Thaler, “Does the Stock Market Overreact?” was about the no-free-lunch argument. When we were writing that paper in the early ’80s, it was generally thought by economists that the one thing we knew for sure is that you can’t predict future stock prices from past stock prices.

He goes on to talk about his work with the British government putting in place successful ‘nudges’ and his relationship with Fama (they sit in mirror-opposite offices at Chicago). He points out that when behavioral economics started with ‘bounded rationality’, a lot of the criticism was that the effects didn’t appear consistently or at the macro level. If you can’t aggregate the behavior, who cares? Well, the more we investigate, the more important it turns out to be. I think neuroeconomics is in a similar stage – I’m not sure many economists really care yet, because it will take time to figure out how to aggregate it. I wish I knew what Thaler thought about neuroeconomics. Anyone have a link to remarks of his on the topic?

Here’s an interview with Shiller who is teaming up with Akerlof to write a book about manipulation and deception in markets.

Nobel Prize in Economics: Fama, Hansen, and Shiller (link round-up)

The winners of this year’s pseudo-Nobel Prize in Economics are Fama, Hansen and Shiller. Marginal Revolution has a good series on what Fama did, what Hansen did, and what Shiller did. Hansen’s work is the hardest to understand in that it is basically stats. Here are more explanations.

Shiller of course had a lot of commentary on the recent bubble and crash which you should read.

I have nothing useful to add that is not in these links.

Why is there no neuroscience blogosphere? (Updated)

aka Why does the neuroscience blogosphere suck?

Obviously, there are tons of great neuroscience blogs out there – I’m not even going to try to list them because they are numerous and I don’t want to accidentally leave one out. But there does not seem to be a blogosphere. To get all middle school on you, Wikipedia defines the blogosphere as the collection of all blogs and their interconnections, implying that they exist as a connected community.

When I look around at the Economics blogosphere, I see a lot of give-and-take between blogs. One blog will post an idea, another blog will comment on it, and the collective community has a discussion. I see this discussion, to a greater or lesser extent, in the other communities I follow: math, physics, and ecology. Yet missing in all this is neuroscience, and perhaps biology in general. Why is this?

The online academic biology community seems primarily interested in discussing the disastrous state of the profession. This set of problems – the lack of funding, the overabundance of PhDs, etc – has a clearly connected blogosphere. There’s lots of discussion.

Are biologists just less interested in discussing broad ideas? I wouldn’t think so, but I don’t see any equivalent to, say, Dynamic Ecology, where discussions on neuroscience ideas big and small can kick off. I think the closest we get is the Neuroskeptic/critic axis.

Am I missing something? Is there a place that big ideas in neuroscience get debated on blogs? Is there a scientific give and take that I’m missing? Is neuroscience too diverse, or too data oriented?

Update: Okay, I’ve been thinking about this and there have been some really great comments. I think I’m won over by one on G+ and Artem’s below. I think there are four key factors:

(1) Too much science communication, not enough science debate. People in the biology blogs seem to want to be science communicators! It’s much easier to do this in a popular field like neuroscience than in, say, math. And the bloggers who attempt popular communication get much more positive feedback than those who try to engage the tiny academic neuroscience blogosphere. I know that my post on Einstein’s brain got orders of magnitude more views than my post on Tony Movshon explaining V2.

(2) Few blogs are focused on individual research themes. It often seems that the most successful blogs devoted to a more academic audience are those with clear research themes (aka, find your niche). But we have almost none of these in neuroscience! I think a lot of this follows from (1). We have blogs like labrigger and Memming, but where are the rest? Visual neuroscience often seems to take up half of the SfN space, but where are the vision blogs?

(3) The blogging community is not used to it. Maybe part of it is that we’re more used to the passive meeting-presentation format than to the more useful symposium (debate) format, but I think the biology community is not used to this kind of debate over ideas, and that discomfort has carried over into the blogs. I know that when I started taking snips from other blogs and commenting on them, I felt…uncomfortable, but it’s something I see all the time in economics/etc.

(4) Data is hard. Let’s just admit to ourselves that biology is more data focused than, say, economics. It is much easier to have a semi-informed opinion on economics than on biology.

The model species

At Molecular Ecologist, Jacob Tennessen asks whether people are the unsung model species of molecular ecology:

Non-invasive genetics and “natural experiments” are employed to make inferences about evolutionary history, behavior, fitness, and other aspects of natural history. These same restrictions also apply to humans: breeding humans in the lab is as ethically fraught as it is logistically challenging. But, the difference between studying humans and, say, elephant seals is that the established knowledge base for humans is much greater. The combined size of available population genetic datasets in humans is a billion-fold larger than for most species, even some that have already been the target of molecular ecology studies, and these human data are much better annotated and validated…

So, what are the most important things we have learned from studying our own molecular ecology? Perhaps the primary lesson from human population genetics is that intergroup differences that seemed substantial to our subjective brains, like between Africans and Europeans, turned out to be minor. There are few if any fixed autosomal differences between continental groups, and the phenotypic markers we are inclined to use, like skin color, are encoded by some of the most divergent loci, making them a poor proxy for overall evolutionary distance. A related major lesson is the surprising ubiquity of “soft sweeps,” or positive selection acting on standing variation. Unlike the classic model of a newly arisen mutation rising quickly to fixation, most geographically local adaptation in humans comes from more subtle changes in the frequency of existing alleles, hence the dearth of fixed differences. A third lesson is that the most genetically diverse human populations are found in our ancestral homeland in sub-Saharan Africa, with basal populations such as the San showing particularly high polymorphism.

These are all excellent and under-appreciated points. In cognitive neuroscience, is there any better model organism than the human brain? One of the limiting factors in incorporating genetics into human neuroscience is the paucity of relevant biological data. We know, roughly, that certain SNPs in genes like DRD4 or SERT can change dopamine or serotonin function, kind of, but it’s very non-specific, and the regulation of individual genes varies across brain regions. It’s difficult, but I’m highly optimistic about the future of humans as a model organism in molecular neuroscience!

I also wonder whether the lessons of evolutionary ecology are sufficiently well-known among theorists of neural function, especially the influence of ‘soft sweeps’. I get the sense that the answer is definitely no.
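
For anyone who, like me, needed the sweep vocabulary spelled out, here is a hedged toy Wright-Fisher simulation (purely illustrative; the parameters are invented and it is not from the quoted post) contrasting a hard sweep, where selection acts on a brand-new mutation, with a soft sweep, where it acts on standing variation already at modest frequency:

```python
# Toy Wright-Fisher model with selection; all parameters are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(3)
N, s, generations = 10_000, 0.02, 2000      # diploid population size, selection coefficient

def final_frequency(p0):
    """Simulate allele frequency forward in time from starting frequency p0."""
    p = p0
    for _ in range(generations):
        p = p * (1 + s) / (p * (1 + s) + (1 - p))   # deterministic selection step
        p = rng.binomial(2 * N, p) / (2 * N)        # binomial sampling (genetic drift)
        if p in (0.0, 1.0):                         # allele lost or fixed
            break
    return p

hard = np.array([final_frequency(1 / (2 * N)) for _ in range(200)])  # new mutation
soft = np.array([final_frequency(0.05) for _ in range(200)])          # standing variation at 5%

print(f"hard sweep: beneficial allele fixed in {np.mean(hard == 1.0):.2f} of runs")
print(f"soft sweep: beneficial allele fixed in {np.mean(soft == 1.0):.2f} of runs")
```

Most new beneficial mutations are simply lost to drift, while selection on standing variation reliably shifts allele frequencies, which is one intuition for why soft sweeps (and frequency shifts rather than fixed differences) dominate recent human adaptation.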

More importantly, though, five points to Gryffindor for the first comment: ‘the speaker (Lawrence Hurst, I think) started with “humans are an excellent model system for understanding Drosophila”’.