Where do people look? Where there’s information

[Animation: where do people look]

1. BusinessInsider has a great collection of pictures tracking where people actually look when they see an image. (Big takeaway: men love to look at other people’s groins.)

2. Watch the video above: people generally look at the face of the person talking or the object that someone is pointing at. Why? Because that is where the information resides.

[Image: information seeking]

3. If you ask someone to look for a hidden target, they will look around in a manner that gives them the most information about where the target may be – this lets them exclude as many locations as possible (see the sketch after this list).

4. But we are social animals, and social animals have a tendency to rely on social information – gathering information from other individuals lets you pass on some of the cost of finding it to others. Humans in crowds will look where other humans are looking – what is going on over there? Is it something important? Why are so many people looking?

5. Of course, humans in crowds are also wary. Other social creatures are potential threats: you want to look at someone else’s face to determine whether they are friendly or not. Many animals (like peacocks!) do this, keeping watch on the places where predators might be hiding. After all, what information is more important?

6. One way that the nervous system accomplishes this is through internal reward: it is ‘enjoyable’ to look at faces (and the more socially relevant the face, the more rewarding).

7. Famously, dogs will look at where people are looking while cats will not; one has evolved to understand this social information while the other has not. Which says a lot about the psychology of a cat!
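
Point 3 describes an ideal-searcher strategy of the kind Najemnik & Geisler modeled. Here is a minimal Python sketch of that idea – not their actual model – assuming a made-up one-dimensional array of locations and a noisy detector with invented hit and false-alarm rates. At each step the searcher fixates wherever the expected reduction in uncertainty (entropy) about the target’s location is largest:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20                     # number of candidate target locations (toy assumption)
P_HIT, P_FA = 0.9, 0.2     # detector hit / false-alarm rates (invented values)

def entropy(p):
    """Shannon entropy, in bits, of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def posterior(prior, fixated, detected):
    """Bayes update of P(target location) after fixating one location."""
    likelihood = np.where(np.arange(N) == fixated,
                          P_HIT if detected else 1 - P_HIT,
                          P_FA if detected else 1 - P_FA)
    post = prior * likelihood
    return post / post.sum()

def expected_posterior_entropy(prior, fixated):
    """Average uncertainty remaining after a fixation, over both outcomes."""
    p_detect = prior[fixated] * P_HIT + (1 - prior[fixated]) * P_FA
    return (p_detect * entropy(posterior(prior, fixated, True)) +
            (1 - p_detect) * entropy(posterior(prior, fixated, False)))

# Greedy ideal searcher: fixate wherever the observation is expected
# to exclude the most possibilities (i.e., maximize information gain).
prior = np.full(N, 1 / N)
target = rng.integers(N)
for step in range(8):
    gains = [entropy(prior) - expected_posterior_entropy(prior, i)
             for i in range(N)]
    fix = int(np.argmax(gains))
    detected = rng.random() < (P_HIT if fix == target else P_FA)
    prior = posterior(prior, fix, detected)
    print(f"step {step}: fixated {fix:2d}, remaining uncertainty "
          f"{entropy(prior):.2f} bits")
```

Run it and the searcher’s uncertainty falls step by step as it rules out locations – the “exclude as many locations as possible” behavior in miniature.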

References
Najemnik, J., & Geisler, W. (2005). Optimal eye movement strategies in visual search. Nature, 434(7031), 387–391. DOI: 10.1038/nature03390
Gallup, A.C., Hale, J.J., Sumpter, D.J., Garnier, S., Kacelnik, A., Krebs, J.R., & Couzin, I.D. (2012). Visual attention and the acquisition of information in human crowds. Proceedings of the National Academy of Sciences, 109(19), 7245–7250. PMID: 22529369
Gallup, A.C., Chong, A., Kacelnik, A., Krebs, J.R., & Couzin, I.D. (2014). The influence of emotional facial expressions on gaze-following in grouped and solitary pedestrians. Scientific Reports, 4. PMID: 25052060
Watson, K.K., & Platt, M.L. (2012). Social signals in primate orbitofrontal cortex. Current Biology, 22(23), 2268–2273. PMID: 23122847
Yorzinski, J.L., & Platt, M.L. (2014). Selective attention in peacocks during predator detection. Animal Cognition, 17(3), 767–777. PMID: 24253451

The man who asked the simplest question

“Claude Shannon answered a question that no one else was even asking.”

This is a nice little video essay on Claude Shannon; even as someone bathed in information theory day in, day out, I found it interesting. Sadly, it ends with a standard #einsteincomplex.

If you haven’t read it yet, James Gleick’s The Information is well worth reading… or at least the first three quarters are. As important as Shannon was, it’s worth remembering what Hamming had to say about him:

When you are famous it is hard to work on small problems. This is what did Shannon in. After information theory, what do you do for an encore? The great scientists often make this error. They fail to continue to plant the little acorns from which the mighty oak trees grow. They try to get the big thing right off. And that isn’t the way things go…

When you go to a new field, you have to start over as a baby. You are no longer the big mukity muk and you can start back there and you can start planting those acorns which will become the giant oaks. Shannon, I believe, ruined himself. In fact when he left Bell Labs, I said, “That’s the end of Shannon’s scientific career.” I received a lot of flak from my friends who said that Shannon was just as smart as ever. I said, “Yes, he’ll be just as smart, but that’s the end of his scientific career,” and I truly believe it was.

via kottke

Information theory of behavior

Biology can tell us what, but theory tells us why. There is a new issue of Current Opinion in Neurobiology that focuses on theory and computation in neuroscience. There’s tons of great stuff there, from learning and memory to the meaning of a spike to the structure of circuitry. I have an article in this issue and even made the cover illustration! It’s that tiny picture to the left; for some reason I can’t find a larger version, but oh well…

Our article is “Information theory of adaptation in neurons, behavior, and mood”. Here’s how it starts:

Recently Stephen Hawking cautioned against efforts to contact aliens [1], such as by beaming songs into space, saying: “We only have to look at ourselves to see how intelligent life might develop into something we wouldn’t want to meet.” Although one might wonder why we should ascribe the characteristics of human behavior to aliens, it is plausible that the rules of behavior are not arbitrary but might be general enough to not depend on the underlying biological substrate. Specifically, recent theories posit that the rules of behavior should follow the same fundamental principle of acquiring information about the state of environment in order to make the best decisions based on partial data

Bam! Aliens. Anyway, it is an opinion piece in which we try to push the idea that behavior can be seen as an information-maximization strategy. Many people have quite successfully pushed the idea that sensory neurons are trying to maximize their information about the environment so that they can represent it as well as possible. We suggest that it may make sense to extend that idea up the biological hierarchy. After all, people generally hate uncertainty – a low-information environment – because it is hard to predict what is going to happen next.

Here is an unblocked copy of the article for those who don’t have access.

References

Sharpee, T., Calhoun, A., & Chalasani, S. (2014). Information theory of adaptation in neurons, behavior, and mood. Current Opinion in Neurobiology, 25, 47–53. DOI: 10.1016/j.conb.2013.11.007

Explaining the structure of science through information theory

tl;dr I propose a half-baked and totally wrong theory of the philosophy of science, based on information theory, that explains why some fields are more data-oriented and some more theory-oriented.

Which is better: data or theory? Here’s a better question: why do some people do theory, and some people analyze data? Or rather, why do some fields tend to have a large set of theorists, and some don’t?

If we look at any individual scientist, we could reasonably say that they are trying to understand the world as well as possible. We could describe this in information-theoretic terms: given some set of data, they are trying to maximize the information they have about their description of the world. One way to think about information is that it reduces uncertainty. In other words, given a set of data, we want to reduce our uncertainty about our description of the world as much as possible. When you have no information about something, you are totally uncertain about it. You know nothing! But the more information you have, the less uncertain you are. How do we do that?

Thanks to Shannon, we have an equation that tells us how much information two things share – in other words, how much knowing one thing will tell us about the other:

I(problem; data) = H(problem) – H(problem | data)

This tells you (in bits!) how much more certain you will be about a problem – our description of the world – if you get some set of data.

H(problem) is the entropy of the problem; it tells us how many different possible ways there are of describing the problem. Is there only one way? Many possible ways? Similarly, H(problem | data) is how many possible ways we have of describing the problem once we’ve seen some data. If we see data and there are still tons of possibilities, the data has not told us much; we won’t have much information about the problem. But if the data is so precise that for each set of data we know exactly how to describe the problem, then we will have a lot of information!
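
To make the two terms concrete, here is a minimal numerical sketch in Python, assuming a made-up joint distribution over three candidate descriptions of the world and three possible data sets; it just evaluates the equation above:

```python
import numpy as np

# Toy joint distribution P(problem, data): rows are candidate
# descriptions of the world, columns are possible data sets.
# The numbers are invented purely for illustration.
joint = np.array([[0.30, 0.02, 0.01],
                  [0.02, 0.25, 0.03],
                  [0.01, 0.03, 0.33]])
joint /= joint.sum()

def entropy(p):
    """Shannon entropy, in bits, of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p_problem = joint.sum(axis=1)   # marginal over descriptions
p_data = joint.sum(axis=0)      # marginal over data sets

h_problem = entropy(p_problem)
# H(problem | data) = sum over data sets d of P(d) * H(problem | data = d)
h_cond = sum(p_d * entropy(joint[:, d] / p_d)
             for d, p_d in enumerate(p_data))

print(f"H(problem)        = {h_problem:.3f} bits")
print(f"H(problem | data) = {h_cond:.3f} bits")
print(f"I(problem; data)  = {h_problem - h_cond:.3f} bits")
```

Because the probability mass sits mostly on the diagonal, each data set points strongly to one description, so the conditional entropy is small and the mutual information is large.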

This tells us that there are two ways to maximize our information about the problem when we have a set of data: we can either expand our set of candidate descriptions of the problem [increasing H(problem)] or we can decrease how many descriptions remain plausible once we see data [decreasing H(problem | data)].

In a high-quality, data-rich world we can mostly get away with the second one: the data isn’t noisy, and will tell us what it represents. Information can simply be maximized by collecting more data. But what happens when the data is really noisy? Collecting more data gives us a smaller marginal improvement in information than working on the set of descriptions – modeling and theory.
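
The same machinery illustrates the noise argument. The sketch below uses a toy “channel” I made up – with some probability the data reports the true description, otherwise it is uniformly random – and shows that as the assumed noise level rises, each data set carries fewer bits about the problem, so grinding out more data buys less and less compared with working on the set of descriptions:

```python
import numpy as np

def entropy(p):
    """Shannon entropy, in bits, of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_info(noise, n=8):
    """I(problem; data) for a toy channel over n descriptions: with
    prob. 1 - noise the data reports the true description; with
    prob. noise it is drawn uniformly over all n possibilities."""
    cond = np.full((n, n), noise / n) + (1 - noise) * np.eye(n)
    joint = cond / n                       # uniform prior over descriptions
    p_data = joint.sum(axis=0)
    h_problem = np.log2(n)                 # entropy of the uniform prior
    h_cond = sum(p_d * entropy(joint[:, d] / p_d)
                 for d, p_d in enumerate(p_data))
    return h_problem - h_cond

for noise in (0.0, 0.3, 0.6, 0.9):
    print(f"noise {noise:.1f}: I = {mutual_info(noise):.2f} bits per data set")
```

With no noise a single data set resolves all three bits of uncertainty; at high noise it delivers a small fraction of a bit, and the marginal value of yet more data collapses.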

This explains why some fields have more theory than others. One of the hopes of Big Data is that it will reduce the noise in the data, shifting fields toward the H(problem | data) part of the equation. On the other hand, the data in economics, honestly, kind of sucks. It’s dirty and noisy and we can’t even agree on what it’s telling us. Hence, marginal improvements come from creating new theory!

Look at the history of physics; for a long time, physics was getting high-quality data and had a lot of experimentalists. Since, oh, the 50s or so it’s been getting progressively harder to get new, good data. Hence, theorists!

Biology, too, has such an explosion of data that it’s hard to know what to do with it all. If you put your mind to it, it’s surprisingly easy to get good data that tells you something.

In short: theory proposes possible alternatives for how the world could work [expanding H(problem)]; data narrows H(problem | data). The problem is that the data itself is noisy.