Trading on information

From Rajiv Sethi:

As shown by Aumann, two individuals who are commonly known to be rational, and who share a common prior belief about the likelihood of an event, cannot agree to disagree no matter how different their private information might be. That is, they can disagree only if this disagreement is itself not common knowledge. But the willingness of two risk-averse parties to enter opposite sides of a bet requires them to agree to disagree, and hence trade between risk-averse individuals with common priors is impossible if they are commonly known to be rational.

This may sound like an obscure and irrelevant result, since we see an enormous amount of trading in asset markets, but I find it immensely clarifying. It means that in thinking about trading we have to allow for either departures from (common knowledge of) rationality, or we have to drop the common prior hypothesis. And these two directions lead to different models of trading, with different and testable empirical predictions…
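The mechanism behind Aumann’s result can be made concrete with a toy simulation in the style of Geanakoplos and Polemarchakis’ “We can’t disagree forever”: two agents share a uniform prior over nine states but hold different information partitions, and take turns announcing their posterior for an event E, each refining their information from the other’s announcement. The states, partitions, and event below are an illustrative example I made up, not taken from any of the papers discussed here.

```python
from fractions import Fraction

# Nine equally likely states and an event E the agents might bet on.
STATES = range(1, 10)
E = {3, 4}

# Hypothetical information partitions for agents A and B.
part_A = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
part_B = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]

def cell(partition, s):
    """The partition cell containing state s."""
    return next(c for c in partition if s in c)

def announce(partition):
    """Posterior P(E | own information), stated at every possible state."""
    return {s: Fraction(len(cell(partition, s) & E), len(cell(partition, s)))
            for s in STATES}

def refine(partition, announcement):
    """Split each cell by what the other agent just announced."""
    new = []
    for c in partition:
        groups = {}
        for s in c:
            groups.setdefault(announcement[s], set()).add(s)
        new.extend(groups.values())
    return new

for _ in range(10):  # more than enough rounds for nine states
    part_B = refine(part_B, announce(part_A))
    part_A = refine(part_A, announce(part_B))

# Once the announcements are common knowledge, the posteriors must agree:
assert announce(part_A) == announce(part_B)
```

In this example agent B starts out assigning probability 1/2 to E while A assigns 1/3, but after a few rounds of back-and-forth both settle on the same posterior at every state, so no bet between them can be struck.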

In a paper that I have discussed previously on this blog, Kirilenko, Kyle, Samadi and Tuzun have used transaction level data from the S&P 500 E-Mini futures market to partition accounts into a small set of groups, thus mapping out an “ecosystem” in which different categories of traders “occupy quite distinct, albeit overlapping, positions.”

One of our most striking findings is that 86% of traders, accounting for 52% of volume, never change the direction of their exposure even once. A further 25% of volume comes from 8% of traders who are strongly biased in one direction or the other. A handful of arbitrageurs account for another 14% of volume, leaving just 6% of accounts and 8% of volume associated with individuals who are unbiased in the sense that they are willing to take directional positions on either side of the market. This suggests to us that information finds its way into prices largely through the activities of traders who are biased in one direction or another, and differ with respect to their interpretations of public information rather than their differential access to private information.

If there’s a message in all this, it is that markets aggregate not just information, but also fundamentally irreconcilable perspectives.

I always find it shocking that economists would find it reasonable to assume that traders have access to the same information or hold the same priors. At the very least, people show strong path dependence: what you believe now is a function of what you believed ten minutes ago. And if we believe reinforcement-learning models – and I think we obviously should – an individual’s history of payoffs and costs, however random, will contribute to a heterogeneous distribution of beliefs.
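The reinforcement-learning point is easy to sketch: give every agent the identical delta-rule update and the identical payoff statistics, and their private histories alone are enough to spread their beliefs apart. This is a minimal illustration with arbitrary parameters, not a model of any particular market.

```python
import random

random.seed(1)

N, T, alpha = 1000, 200, 0.1   # agents, trials per agent, learning rate
p_true = 0.5                   # every agent faces the same payoff process

beliefs = []
for _ in range(N):
    b = 0.5                                  # everyone starts at a common prior
    for _ in range(T):
        payoff = 1.0 if random.random() < p_true else 0.0
        b += alpha * (payoff - b)            # standard delta-rule update
    beliefs.append(b)

# Identical agents, identical environment -- and yet a wide spread of beliefs.
spread = max(beliefs) - min(beliefs)
print(round(spread, 2))
```

Every agent here is “rational” in the same mechanical sense, yet the recency weighting of the delta rule means their final beliefs are scattered around the truth rather than identical.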

Of course, information sharing has been considered in ecology for a long time. In fact, information transfer between individuals is something of a hot field right now: look into the literature on flocking and group dynamics for examples of researchers trying to extract what information is transferred between individuals, and how. For a recent example, see the work coming out of Iain Couzin’s lab.

That being said, this is fantastic economic-level data.

Unrelated to all that, 9/27(ish) edition

Okay, I’m finally back from traveling and appeasing the various authorities in my life. That means I’ll be back to my regularly-scheduled programming. Meanwhile, links!

From the blog

David Hubel passed away, which was sad. But then Colin Camerer won a genius prize, which is happy but does not take away the sadness.

On other blogs

Is it fun to be a professor? Apparently, yes.

How to make better (cleaner) figures. A link to another link roundup, but the first item is too good to let go

Can math explain history? And a Q&A. This is something that I’d love to comment on but don’t feel like I have the historical background knowledge for any proper critique.

What it’s like to attend one of those spam conferences you (or, at least, I) get emails about.

On error bars.

The Emperor Gladwell is naked. Don’t click the link but do the google news method first!

These are the most cited papers in behavioral economics.

Tamarins whisper when they think they’re being overheard

Are male and female brains different, or are male brains just bigger (because males are bigger)?

In the journals

Interaction rules underlying group decisions in homing pigeons

Goats favor personal over social information in an experimental foraging task

On the sister blog

Bach was a thug and Mozart was pretty crude

A truly majestic animal

Context, people, context!

Some recent photo finds

Neuroeconomist Colin Camerer wins MacArthur genius award

The MacArthur genius fellows for 2013 were just announced and one winner was neuroeconomist Colin Camerer:

Colin Camerer came to Caltech in 1994 with an MBA in quantitative studies and a doctorate in decision theory from the University of Chicago’s business school, a place he described as “the temple of beliefs in highly rational people who make really good decisions and take into account the future.”…

Those questions have led to pioneering research into how the brain works while making decisions about such things as whether to participate in an economic “bubble,” when prices accelerate. Last week, Camerer published a study that suggests participants in a bubble – such as the increase in housing prices – are not reckless or rash, but have a highly developed “theory of mind” that allows them to consider what others would do, and use that to guide their risky decisions.

The study was not published in the major economics journals, but in Neuron, a highly regarded neuroscience journal. That, said Camerer, was validation enough.

Any reader of this blog knows that I am a firm believer that neuroscience will have a lot to say to the field of economics so it should be unsurprising that I am excited that Colin Camerer won this award. Here are some papers of his that are worth reading.

The other neuroscientist who won a genius award was Sheila Nirenberg, whom I have admired for a while. Here is a TED talk where she describes her work on retinal prosthetics. That is great, but I have to admit that every time I hear about her I am reminded of a series of hilarious rebuttal papers between her and Bill Bialek (and back again!) about the proper way to use information theory to decode neural responses. They are essentially arguing in circles about whether it is more proper to look at encoding or decoding strategies, but you really get a sense of anger from the papers.

C. Kevin Boyce is an ecologist who should also be mentioned, though I don’t know anything about his work! Here is the description from the Nature blog post: “a paleobotanist at Stanford University in California, examines extinct and living plants to link ancient and present-day ecosystems. He has deduced that the evolution of flowering plants influenced the water cycle in the ancient tropics, giving rise to the rainfall patterns and rich biodiversity characteristic of modern rainforests.” Sounds cool to me.


David Hubel, 1926 – 2013

Word comes that David Hubel passed away last night. A Nobel laureate who studied the visual system, he was a legend in many ways.

First and foremost are his investigations into the basic representations of the world in the visual cortex. It was known prior to his (and, importantly, Torsten Wiesel’s) experiments that neurons in the retina respond to ON/OFF changes in light intensity at a specific part of the visual field – an area referred to as that neuron’s receptive field. In the illustration above, an ON-center cell responds best to a bright spot surrounded by a dark ring, while an OFF-center cell responds best to the inverse light pattern. These cells are excellent at finding the edges of the visual world.

Hubel expected the visual cortex to contain neurons that responded in the same way. As happens so often in science, the visual cortex was uncooperative with his pet theory and did not respond to light and dark patches. Famously, it was only by accident that he discovered that visual cortical neurons respond to moving bars of light and dark! In particular, cortical neurons respond to precisely oriented edges of light; a bar placed in an identical location but rotated ninety degrees will elicit no response at all from the cell! Hubel and Wiesel classified these neurons into two distinct types, “simple” and “complex” cells (although there is contemporary debate as to whether these categories are truly distinct). Whereas a simple cell will only respond to a bar at a precise angle in a precise location, a complex cell will respond to bars of a specific orientation located anywhere within its receptive field.

Hubel and Wiesel proposed a simple model of how these cells build up their behavior from the retinal cells that they (indirectly) receive input from. Simple cells gather input from a line of ON/OFF retinal cells while complex cells gather input from a collection of simple cells. Here is their original drawing:

On top you can see the proposed pooling of the ON/OFF receptive fields to form a line, and on bottom you can see the pooling of simple cells to enable an invariant response to lines regardless of location. Beautiful and simple! It is my understanding that much of this has been shown to be pretty much on the mark (though I have not paid attention to this subfield in years).
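Their pooling scheme is simple enough to sketch in a few lines. Below, a hypothetical “simple cell” is a column of excitatory subunits with inhibitory flanks, and a “complex cell” takes the maximum over simple cells at different positions; the grid size, weights, and rectification are my own arbitrary choices, meant only to show the logic of the hierarchy.

```python
SIZE = 9

def bar(orientation, pos):
    """A bright bar on a dark background: 'v' = vertical bar at column pos,
    'h' = horizontal bar at row pos."""
    img = [[0.0] * SIZE for _ in range(SIZE)]
    for i in range(SIZE):
        if orientation == "v":
            img[i][pos] = 1.0
        else:
            img[pos][i] = 1.0
    return img

def simple_cell(img, pos):
    """A Hubel-and-Wiesel-style simple cell: a column of excitatory (ON)
    subunits at column pos, flanked by inhibitory regions. Output is
    rectified. Assumes an interior column (1 <= pos <= SIZE - 2)."""
    total = 0.0
    for i in range(SIZE):
        total += img[i][pos]               # aligned ON subunits
        total -= 0.5 * img[i][pos - 1]     # inhibitory flanks
        total -= 0.5 * img[i][pos + 1]
    return max(0.0, total)

def complex_cell(img):
    """Pools simple cells across interior positions: still
    orientation-selective, but now position-invariant."""
    return max(simple_cell(img, p) for p in range(1, SIZE - 1))

# A simple cell fires for a bar at its preferred orientation and location...
assert simple_cell(bar("v", 4), 4) > 0
# ...but not for the same bar rotated ninety degrees:
assert simple_cell(bar("h", 4), 4) == 0
# A complex cell fires for its preferred orientation anywhere in its field:
assert complex_cell(bar("v", 2)) > 0 and complex_cell(bar("v", 6)) > 0
assert complex_cell(bar("h", 4)) == 0
```

The two stages mirror the drawing: aligned ON/OFF pooling buys orientation selectivity, and pooling over simple cells buys invariance to position.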

Hubel’s research extended so much further in its characterization of the visual system, but I’m not going to go into that. Rather, it is worth reading the speech he gave upon receiving his Nobel Prize. It is enjoyable and surprisingly gripping! As someone who knows the research inside and out, I still found a lot to learn from it. There are some great historical tidbits including this:

Many of the ideas about cortical function then in circulation seem in retrospect almost outrageous. One has only to remember the talk of “suppressor strips”, reverberating circuits, or electrical field effects. This last notion was taken so seriously that no less a figure than our laureate-colleague Roger Sperry had had to put it to rest, in 1955, by dicing up the cortex with mica plates to insulate the subdivisions, and by skewering it with tantalum wire to short out the fields, neither of which procedures seriously impaired cortical function (7, 8). Nevertheless the idea of ephaptic interactions was slow to die out. There were even doubts as to the existence of topographic representation, which was viewed by some as a kind of artifact.

Hubel’s work was absolutely fundamental to our current understanding of cortical function. Something like 10% of all searches for ‘neuroscience’ on PubMed are related to vision, which is a shockingly high number for a field studying the whole nervous system. It goes without saying that Hubel’s work is the progenitor of much of this research, and is what enabled the visual system to become the model system of cortical function that it is today.

If you want to get the chills and see real science done, watch this movie of Hubel and Wiesel mapping out receptive fields.

Unrelated to all that, 9/20(ish) edition

I’ve been in beautiful Portland, Oregon the past few days for my Grandmother’s 100th (!) birthday. We’ve had several celebratory dinners and brunches; I’m completely exhausted yet my Grandmother keeps on truckin’. At one of the dinners there was a slideshow of pictures from her life and it’s almost unimaginable how much she’s seen; she grew up in rural Alberta (her mother at one point had a pet bear, no joke), and she remembers seeing the troops marching off to war in World War I and listening to the King’s Speech.

All of the cousins on that side of the family gathered together in one place for the first time in twenty years – we have lived pretty far apart, from all across the US to Canada (Vancouver) to Mexico to South Korea to the UK. Seeing everyone as adults for the first time, it’s striking how many similarities there are between children who have lived in pretty distinct families and environments. It makes me think there might be something to this whole genetics thing…

Anyway, here are your links for last week:

On the blog

I discussed some success from neuroscience, and why, though ‘bumpology’ in the popular press may not be informative, the science is useful to scientists.

I also posted a link to a classroom experiment designed to illustrate how trade may have developed. This is a great example of how economics should progress – experimentally – especially when one considers how sophisticated economic thought has been historically.

Additionally, there was a great paper that was recently published on how oxytocin can regulate social reward. As an addendum, it was pointed out to me on twitter that the serotonin receptor under discussion can sometimes be found on the postsynaptic glutamatergic (excitatory) cells; this is something I want to look into more.

Finally, I stirred up a bit of a fuss by accusing Gary Marcus of hating computational neuroscience. I’ll admit to being in a bit of an…ornery mood the morning I wrote that, and Gary pointed out that he, in fact, does not hate computational neuroscience. I think we have a difference of opinion on precisely what is an impressive advance in AI and what neuroscience has contributed to it. I’m working on a post going into that in more detail, so look out for that. I meant to try to have more of a discussion with him on it, but I’ve been surprised by how much I’ve had to do in lab and with my relatives…

On other blogs

Digging into the details, the Hot Hand does exist but it really sucks

The case of the disappearing teaspoons! An article from PubMed.

A freshwater flea, magnified

This has always been my favorite Calvin and Hobbes

How does fMRI work?

Is it possible to recover from a setback in academia like this?

Yelp for journals!

Medium seems like it’s just Kuro5hin remade (everything old is new again)

Dogs are perfectly happy to socialize with robots

Why the paradox of choice might be a myth

In the journals

Individual personalities predict social behaviour in wild networks of great tits (Parus major)

Nectar thieves influence reproductive fitness by altering behavior of nectar robbers [Mostly just an excellent title]

Sex differences in the influence of social context, salient social stimulation, and amphetamine on ultrasonic vocalizations in male and female prairie voles

Nucleus accumbens response to gains in reputation for the self relative to gains for others predicts social media use

Surprised at all the entropy: hippocampal, caudate and midbrain contributions to learning from prediction errors

On the sister blog

Tel Aviv has a fantastic street art scene, but my favorite work is from Broken Fingaz

Death metal robots!  That is all.

There are some bizarre and awesome subreddits out there, and these are a few that I would like you to know about

I found a really cool gif illustrating how a motor works, go look at it and be learned

What is happening in the Great Plains is tragic and scary

Dali and Bunuel made a movie. It was weird.

Why does Gary Marcus hate computational neuroscience?

OK, this story on the BRAIN Initiative in the New Yorker is pretty weird:

To progress, we need to learn how to combine the insights of molecular biochemistry…with the study of computation and cognition… (Though some dream of eliminating psychology from the discussion altogether, no neuroscientist has ever shown that we can understand the mind without psychology and cognitive science.)

Who, exactly, has suggested eliminating psychology from the study of neuroscience? Anyone? And then there’s this misleading paragraph:

The most important goal, in my view, is buried in the middle of the list at No. 5, which seeks to link human behavior with the activity of neurons. This is more daunting than it seems: scientists have yet to even figure out how the relatively simple, three-hundred-and-two-neuron circuitry of the C. Elegans worm works, in part because there are so many possible interactions that can take place between sets of neurons. A human brain, by contrast, contains approximately eighty-six billion neurons.

As a C. elegans researcher, I have to say: it’s true there’s a lot we don’t know about worm behavior! There are also not quite as many worm behavioralists as there are, say, human behavioralists. But there is a lot that we do know. We know full circuits for several behaviors, and with the tools that we have now, that number’s going to explode over the next few years.

But then we learn that, whatever else, Gary Marcus really doesn’t like the work that computational neuroscientists have done to advance their tools and models:

Perhaps the least compelling aspect of the report is one of its justifications for why we should invest in neuroscience in the first place: “The BRAIN Initiative is likely to have practical economic benefits in the areas of artificial intelligence and ‘smart’ machines.” This seems unrealistic in the short- and perhaps even medium-term: we still know too little about the brain’s logical processes to mine them for intelligent machines. At least for now, advances in artificial intelligence tend to come from computer science (driven by its longstanding interest in practical tools for efficient information processing), and occasionally from psychology and linguistics (for their insights into the dynamics of thought and language).

Interestingly, he gives his own fields, psychology and linguistics, a pass, as though they have contributed so much more. So besides the obvious case of neural networks, let’s think about what other aspects of AI have been influenced by neuroscience. I’d count deep learning as a bit separate, and Google is clearly pretty excited about that. Algorithms for ICA, a source-separation method used in machine learning, were influenced by ideas about how the brain uses information (Tony Bell). Work on the roles of dopamine and serotonin has contributed to reinforcement learning. Those are just the first things I can think of off the top of my head (interestingly, almost all of this sprouted out of Terry Sejnowski’s lab). There have been strong efforts on dimensionality reduction – an important component of machine learning – from many, many labs in computational neuroscience. These all seem important to me; what, exactly, does Gary Marcus want? He doubles down on it in the last paragraph:

There are plenty of reasons to invest in basic neuroscience, even if it takes decades for the field to produce significant advances in artificial intelligence.

What’s up with that? There are whole companies whose sole purpose is to design better algorithms based on principles from spiking networks. Based on his previous output, he seems dismissive of modern AI (such as deep learning). Artificial intelligence is no longer the symbolic enterprise we used to think it was: it is powerful statistical techniques. We don’t live in the time of Chomskyan AI anymore; it’s the era of Norvig. And modern AI focuses on statistical principles that are highly influenced by ideas from neuroscience.
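To make the dopamine example concrete: on the account usually credited to Schultz, Montague, and Dayan, dopamine neurons report the temporal-difference prediction error from reinforcement learning. Here is a minimal tabular TD(0) sketch; the three-state chain, learning rate, and discount factor are arbitrary choices for illustration.

```python
# Tabular TD(0) on a tiny chain: s0 -> s1 -> s2 -> (reward of 1, episode ends).
# delta below is the reward-prediction error that dopamine neurons are
# thought to signal.
gamma, alpha = 0.9, 0.1
V = [0.0, 0.0, 0.0]   # learned value of each state

for episode in range(2000):
    for s in range(3):
        r = 1.0 if s == 2 else 0.0           # reward arrives on the last step
        v_next = V[s + 1] if s < 2 else 0.0  # terminal state has value 0
        delta = r + gamma * v_next - V[s]    # prediction error ("dopamine")
        V[s] += alpha * delta                # nudge the value toward the target

print([round(v, 3) for v in V])  # -> [0.81, 0.9, 1.0]
```

The values converge to the discounted distance-to-reward, and the prediction error itself migrates backward from the reward to the earliest predictive state – which is exactly the behavior recorded from dopamine neurons, and exactly the algorithmic core of much of modern RL.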

How little we know

In a recent issue of Nature there is a discussion of the history of utility theory:

Three centuries ago, in September 1713, the Swiss mathematician Nikolaus Bernoulli wrote a letter to a fellow mathematician in France, the nobleman Pierre Rémond de Montmort. In it, Bernoulli described an innocent-sounding puzzle about a lottery…The result is surprising. Each product — 1 × ½, 2 × ¼, 4 × ⅛, and so on — is a half. Because the series never ends, given that there is a real, if minute, chance of a very long run of tails before the first head is thrown, infinitely many halves must be summed. Shockingly, the expected win amounts to infinity…

In May 1728, writing from London, the 23-year-old mathematician Gabriel Cramer from Geneva weighed in. “Mathematicians value money in proportion to its quantity, and men of good sense in proportion to the usage that they may make of it.” This was a far-ranging insight. Adding a ducat to a millionaire’s account will not make him happier, Cramer reasoned. The usefulness of an extra coin is never zero, but simply less than that of the previous one — as wealth increases, so does utility, but at a decreasing rate. Assuming that utility increases with the square root of wealth, Cramer recalculated the expected win to be a little over 2.9 ducats.

Daniel encapsulated the probability scenario in a plot of utility versus monetary value, now known as a ‘utility function’ (see ‘Risky business’)… The curve’s diminishing gradient implies that it is always worth paying a premium to avoid a risk. The consequences of this simple graph are enormous. Risk aversion, as expressed in the concave shape of the utility function, tells us that people prefer to receive a smaller but certain amount of money, rather than facing a risky prospect.

It was a bit shocking to me how advanced these concepts were for the early eighteenth century – and how little we have moved beyond the insights of those mathematicians. It’s a testament to the lack of experiment in economics that it took until the twentieth century for Allais (and his “paradox”) and for Kahneman and Tversky’s theories to come about.
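Cramer’s calculation is easy to reproduce. The lottery pays 2^(k-1) ducats if the first head arrives on toss k (probability 1/2^k): expected money diverges, but expected utility under u(x) = √x converges, and inverting u gives the certainty equivalent. Truncating the series at 100 tosses loses nothing numerically.

```python
from math import sqrt

# St Petersburg lottery: heads on toss k (prob 1/2**k) pays 2**(k-1) ducats.
expected_money = sum((0.5 ** k) * 2 ** (k - 1) for k in range(1, 101))
expected_utility = sum((0.5 ** k) * sqrt(2 ** (k - 1)) for k in range(1, 101))

certainty_equivalent = expected_utility ** 2  # invert u(x) = sqrt(x)

print(expected_money)        # 50.0 -- each term adds 1/2, so it never stops
print(certainty_equivalent)  # ~2.91 ducats: Cramer's "little over 2.9"
```

Every term of the money series is exactly one half, so the partial sums grow without bound; the utility series is geometric and sums to about 1.71, whose square recovers Cramer’s answer.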

How oxytocin regulates social reward

Why do we care about other people? Not just why do we care for them, but why do we care about them – their existence, their presence, what they do and how they make us feel?

For a long time, the canonical explanation has been that the hormone oxytocin is a sort of ‘love hormone’ whereby release causes some sort of bonding between two individuals. This story comes to us from the gerbil-like prairie voles. Prairie voles, you see, are pair-bonders who hook up with one mate for life. They’re so attached to each other that once bonded, males will attack any new female that they see (so much for “love” hormone). Luckily for us scientists, there is another closely related vole that does not pair bond. This made it relatively easy to trace the difference: oxytocin receptors in the nucleus accumbens (NAcc).

The NAcc is an area of the brain that is directly involved in motivation and reward; we tend to think of it as the place where the brain keeps track of how rewarding something is. By acting as a sort of central coordination center for value, it can directly promote physical behaviors. Activating the correct neurons related to reward on the left side of the animal will cause the animal to physically turn to the left.

The bond that prairie voles form is linked to oxytocin receptors in NAcc that change neural activity (and I’m simplifying a bit by neglecting the role of the related hormone vasopressin). This change makes their social (pair-bonded) life more rewarding.

At least, that’s one view. But many animals have a social life that does not involve pair bonding, and often they have no oxytocin receptors in their NAcc. If oxytocin in the NAcc were required for strong social behaviors, how do animals that lack the receptors manage to be social at all?

In what I consider the most exciting paper so far this year, Dölen et al. investigate the neural circuit that makes social interactions rewarding. Mice are actually social creatures, living in small groups to share parental and defensive responsibilities. Dölen et al. exploit this by using a variation on a classic conditioned place preference (CPP) experiment. Mice are placed in one identifiable room with other mice (social); they are then placed in another identifiable room on their own (isolated). When they are finally put in a box with two rooms, one that looks like their social condition and one that looks like the isolated one, they spend much more time in the room that reminds them of their social experience. We tend to think this means they prefer that room because it was somehow more rewarding (or less aversive).

This social conditioning requires oxytocin. Yet when they delete the oxytocin receptors from cells in the NAcc, animals still become conditioned. Only when oxytocin receptors are removed from other areas that project into the NAcc do the animals lose social reinforcement. These receptors sit in one specific area, the dorsal raphe nucleus, a major source of serotonin in the brain. Interestingly, serotonin is also linked to social behaviors and to modification of reward circuitry.

What this suggests is that oxytocin affects reward through serotonin; blockade of certain serotonin receptors in NAcc also abolishes social conditioning. It is not surprising that oxytocin could regulate reward in multiple ways. Serotonin may represent distinct aspects of reward – on different timescales, for instance – than other cells that feed into NAcc. By modulating serotonin instead of NAcc itself, oxytocin can precisely fashion the rewarding effects of social behavior.

As a technical matter, they also propose the receptor that serotonin is acting through (5HT1B). I am under the impression that this is an autoreceptor in NAcc. In other words, it sits on the serotonin-releasing cell in order to monitor how much serotonin has been released and sculpt the output. Because they used pharmacology to block the receptor, I worry a bit that they are not pinpointing the receptor that oxytocin acts through per se, but just modifying serotonin release in a gross manner. I feel a little vindicated in this worry by the fact that some of their technical results do not appear to be wholly blocked by 5HT1B blockade.

Reference

Dölen G, Darvishzadeh A, Huang KW, & Malenka RC (2013). Social reward requires coordinated activity of nucleus accumbens oxytocin and serotonin. Nature, 501 (7466), 179-84 PMID: 24025838

How trade develops: thinking in terms of “we”

This is an absolutely fantastic classroom experiment by Bart Wilson:

In the traditional market experiment, the experimenters explain to the participants how to trade. For this experiment that seemed more than a little heavy handed if the question is, what is the process by which exchange “gives occasion,” as Adam Smith says, to discovering the “division of labour”? …Thus the first requirement in building the design was that participants would have to discover specialization and exchange…

The participants choose how much of their daily production time they would like to allocate to producing red and blue items in their field. They are then told, deliberately in the passive voice, that “you earn cash based upon the number of red and blue items that have been moved to your house.” What they have to discover is that not only can they move items to their own house, but that they can move items to other people’s houses…

At one extreme, the economy achieves 88% of the possible wealth above self-sufficiency by the last day[.] And at the other extreme, only 6% of the possible wealth above autarky is realized[…] Why the disparity? These students are immediately engaging their counterparts as part of an inclusive “we”. The same is not true in group 4 [which achieved less wealth].

He then goes into detail on the words and modes of thinking that different groups used to develop the idea of trade and markets. The conclusion is that the development of trade and specialization arises from considering the group rather than the individual. And this is in a capitalist society! This is not to say that the only way for trade and specialization to develop is through a kind of group-consciousness, nor that they wouldn’t have developed anyway. But it is a bit of evidence that thinking in terms of “we” can foster the conditions that make mutually beneficial trade networks increasingly likely.

As a second experiment, I would be interested in how quickly students familiar with the idea and the mathematics would find the optimal solution, and how it would evolve in a ‘noisy’ environment. I’d really like to see more advanced analyses of the text as well, the communication networks that evolve, and how they coordinate the development of the intellectual idea. Is there a tipping point? Is it a steady accumulation towards the optimum? Are there ‘laggards’ that are unconvinced?

But this is a great experiment and a great teacher.

On bumpology: what neuroscience knows

Adam Gopnik has a review in the New Yorker of several neuroscience-themed books. Or perhaps I should say, he has a review of several books that use neuroscience-related words. As he points out, one of the problems with neuroscience happens to be the people who love the sound of neuroscience:

A core objection is that neuroscientific “explanations” of behavior often simply re-state what’s already obvious. Neuro-enthusiasts are always declaring that an MRI of the brain in action demonstrates that some mental state is not just happening but is really, truly, is-so happening. We’ll be informed, say, that when a teen-age boy leafs through the Sports Illustrated swimsuit issue, areas in his brain associated with sexual desire light up. Yet asserting that an emotion is really real because you can somehow see it happening in the brain adds nothing to our understanding. Any thought, from Kiss the baby! to Kill the Jews!, must have some route around the brain. If you couldn’t locate the emotion, or watch it light up in your brain, you’d still be feeling it. Just because you can’t see it doesn’t mean you don’t have it. Satel and Lilienfeld like the term “neuroredundancy” to “denote things we already knew without brain scanning,” mockingly citing a researcher who insists that “brain imaging tells us that post-traumatic stress disorder (PTSD) is a ‘real disorder.’ ” The brain scan, like the word “wired,” adds a false gloss of scientific certainty to what we already thought. As with the old invocation of “culture,” it’s intended simply as an intensifier of the obvious.

It’s always perplexing that you can take a study showing that something “lights up” an area of the brain and the press will ooooh and aaaah over it. The problem is not that it’s bad science – it often isn’t – the problem is that it doesn’t tell the lay person anything. At all. To a scientist, these studies can be fascinating springboards to further research, or bring clarity to previous technical work. Take, for instance, two fantastic studies that used electrode recordings to localize two types of uncertainty to the pulvinar and the septum. As a scientist, I found the results nontrivial and important for our understanding of how uncertainty is represented and used in the brain. To a non-specialist, the takeaway message is: uncertainty exists in two strangely named areas of the brain…? This is a prime reason why, despite their importance, I don’t highlight those kinds of papers on this blog.

But then Gopnik undermines his point:

She discusses whether the hormone testosterone makes men angry. The answer even to that off-on question is anything but straightforward. Testosterone counts for a lot in making men mad, but so does the “stress” hormone cortisol, along with the “neuromodulator” serotonin, which affects whether the aggression is impulsive or premeditated, and the balance between all these things is affected by “other hormones, other neuromodulators, age and environment.”

Yes, the role of neuromodulators is unfortunately complicated (I’ll let out a personal sigh, here). Yet this example of a seemingly convoluted mechanism is a fantastic example of something we actually know a fair bit about. Serotonin levels actually have a causative role in the rate of aggression – animals that are given a serotonin inhibitor will begin exhibiting less aggression. You can actually map out some of this circuitry in crustaceans to find out exactly how serotonin is acting! In mammals, serotonin seems to have two ways of affecting aggressive behavior. Serotonin release in the striatum will modify dopamine activity, a proxy for the rewarding value of an [aggressive?] action. In other words, serotonin makes being aggressive seem like a less attractive option. Simultaneously, serotonin in the prefrontal cortex inhibits the transmission of signals coming from the amygdala – or, the top-down (self-control) area begins regulating the aggressive signal coming from another area. Complicated? Yes, it can be. But why would you expect the brain to be so simple?

More importantly, there are areas where neuroscience has provided fantastic explanations for how we perceive the physical world. The greatest success in my mind is in the realm of optical illusions. We’ve known about these illusions for a long time but it is only recently that we can give firm physical and neurological explanations for why they occur.  There are many examples out there.

The problem is that once something is explained it is no longer interesting. I think Gopnik is hoping for a concise answer to hard problems – but we’re not there yet.