Your eyes are your own

Retinal mosaic

This blows my mind. There is a technique that allows us to map the distribution of cones in a person’s eyes. You would think that there would be some consistency from individual to individual, or that the cones would be distributed in some predictable manner. But no!

What happens when you show each of these people a flash of color and ask them to just give us a name? Something like you’d expect – the person who seems to have only red cones names just about everything either red or white. Those with a broader distribution of cones give a broader distribution of colors. You can even predict the colors that someone will name based solely on the tiling of these cones.
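Just to get a feel for that last idea, here is a toy sketch (not the paper’s actual Bayesian model): imagine predicting naming probabilities straight from the proportions of cone types in someone’s mosaic. Every number and rule below is invented for illustration.

```python
# Toy illustration only: map a subject's cone-type proportions (L, M, S)
# to probabilities of naming a small flash "red", "green", "blue", or "white".
# The weighting rule and all numbers are invented for this sketch; they are
# not the Bayesian model of Brainard, Williams & Hofer (2008).

def naming_probabilities(cone_fractions):
    L, M, S = cone_fractions
    # Crude assumption: chromatic names track relative cone numbers, and
    # "white" gets more likely the more lopsided the mosaic is.
    imbalance = max(L, M, S) - min(L, M, S)
    raw = {"red": L, "green": M, "blue": S, "white": 0.5 * imbalance}
    total = sum(raw.values())
    return {name: weight / total for name, weight in raw.items()}

# A mosaic dominated by L ("red") cones vs. a more balanced one
print(naming_probabilities((0.85, 0.10, 0.05)))
print(naming_probabilities((0.50, 0.40, 0.10)))
```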

And none of these people are technically color blind! What kind of a world is BS seeing??

Reference

Brainard, D., Williams, D., & Hofer, H. (2008). Trichromatic reconstruction from the interleaved cone mosaic: Bayesian model and the color appearance of small spots. Journal of Vision, 8(5), 15. DOI: 10.1167/8.5.15

Illusions are life


Just adding the right combination of grey and white crosses really screws things up, doesn’t it? It seems likely that the illusion comes from the perceived illumination (it probably helps that these are essentially Gabors).

There’s a nice reminder in Science this week that we are not the only animals subject to illusions – here is one in yeast (from the abstract):

We systematically monitored growth of yeast cells under various frequencies of oscillating osmotic stress. Growth was severely inhibited at a particular resonance frequency, at which cells show hyperactivated transcriptional stress responses. This behavior represents a sensory misperception—the cells incorrectly interpret oscillations as a staircase of ever-increasing osmolarity. The misperception results from the capacity of the osmolarity-sensing kinase network to retrigger with sequential osmotic stresses. Although this feature is critical for coping with natural challenges—like continually increasing osmolarity—it results in a tradeoff of fragility to non-natural oscillatory inputs that match the retriggering time.

In other words, a very non-natural stimulus – a periodic change in salt concentration – leads the yeast to instead ‘see’ a constant increase in the concentration. Pretty cool.
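Here is a cartoon of how retriggering turns an oscillation into a staircase (my own toy simulation, not the paper’s kinase model): the cell’s internal estimate ratchets up every time the salt concentration rises and relaxes only slowly, so a square wave gets read as a ramp.

```python
# Cartoon of a retriggering stress sensor (not the actual osmolarity-sensing
# kinase network). The internal estimate jumps whenever external osmolarity
# rises and decays only slowly, so an oscillating input is read as an
# ever-increasing osmolarity.

def simulate(period=8, steps=200, decay=0.01):
    perceived, prev = 0.0, 0.0
    trace = []
    for t in range(steps):
        external = 1.0 if (t // period) % 2 == 0 else 0.0  # square-wave salt pulses
        if external > prev:          # each rise retriggers the response
            perceived += 1.0
        perceived *= (1.0 - decay)   # slow relaxation between pulses
        prev = external
        trace.append(perceived)
    return trace

trace = simulate()
print(f"perceived level after the first pulse: {trace[10]:.2f}")
print(f"perceived level near the end:          {trace[-1]:.2f}")  # keeps climbing
```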

(via Kyle Hill)

Facts about color vision

From an article by Ed Yong (remember, rods ~ night vision, cones ~ color daytime vision):

In 1913, American zoologists Horatio H. Newman and J. Thomas Patterson wrote, “The eyes [of the nine-banded armadillo] are rudimentary and practically useless. If disturbed an armadillo will charge off in a straight line and is as apt to run into a tree trunk as to avoid it…”

A wide range of animals, including many birds, fish, reptiles, and amphibians, have eyes with four types of cones, allowing them to discriminate between a huge range of colours. Mammals, however, evolved from a nocturnal ancestor that had already lost two of its cones, and many have stuck with this impoverished set-up. Dogs, for example, still only have two cones: one tuned to violet-ish colours and another tuned to greenish-blue. (Contrary to popular belief, a dog’s world isn’t black-and-white; they see colours, albeit a limited palette.)

Humans and other primates partly reversed the ancient loss by reinventing a third red-sensitive cone, which may have helped us to discern unripe green fruits from ripe red/orange ones. Ocean-going mammals, meanwhile, took the loss of cones even further and disposed of their blue/violet-sensitive ones. And the great whales have lost all their cones entirely. They only have rods. The ocean is blue, but a blue whale would never know…there are even people who have rod-only vision—they do well in all but brilliant sunlight, and have sharp enough vision to read in normal light. (Then again, Emerling says that this condition is sometimes called “day blindness”, and that “it’s frequently painful for these individuals to keep their eyes open during the day.”)

There are tons of other great little facts about the vision of different animals in the article.

21st century advances in art: optical illusions

Expanding heart illusion

Never let it be said that science has contributed nothing to art! The study of optical illusions not only gives us crazy cool images to look at, but tells us about who we are and how we function in the world. Contemplate that.

I somehow forgot to link to the 2014 Optical Illusions finalists, which is apparently a thing, but there you are. There are some pretty cool optical illusions in there.

Of course, you could just watch the new OK Go music video instead, which is one long set of optical illusions.

Business Insider has an explanation of how many of the illusions work and made us some pretty GIFs while they were at it. Go read!

two heads illusion

 

Clay Reid & The Brain Institute

Sounds like a band name, huh? As I jet off to Cosyne, this article seemed appropriate:

As an undergraduate at Yale, he majored in physics and philosophy and in mathematics, but in the end decided he didn’t want to be a physicist. Biology was attractive, but he was worried enough about his mathematical bent to talk to one of his philosophy professors about concerns that biology would be too fuzzy for him.

The professor had some advice. “You really should read Hubel and Wiesel,” he said, referring to David Hubel and Torsten Wiesel, who had just won the Nobel Prize in 1981 for their work showing how binocular vision develops in the brain…

“Torsten once said to me, ‘You know, Clay, science is not an intelligence test.’” Though he didn’t recall that specific comment, Dr. Wiesel said recently that it sounded like something he would have said. “I think there are a lot of smart people who never make it in science. Why is it? What is it that is required in addition?”…

He is studying only one part of one animal’s brain, but, he said, the cortex — the part of the mammalian brain where all this calculation goes on — is something of a general purpose computer. So the rules for one process could explain other processes, like hearing. And the rules for decision-making could apply to many more complicated situations in more complicated brains. Perhaps the mouse visual cortex can be a kind of Rosetta stone for the brain’s code.

It’s a fun read about the goals of Clay Reid and of the Brain Institute as a whole. I’m always dubious about using the visual system as a model for anything subcortical, and about the implicit assumption that non-cortex is less important than cortex. And what about long-timescale modulation? But for all that, they’re doing pretty cool stuff up there.

 

Learning to see through semantics

Humans have a visual bias: everything in vision seems easy and natural to us, and it can seem a bit of a mystery why computers are so bad at it. But there is a reason such a massive chunk (about 30%) of cortex is devoted to it. It’s really hard! To do everything that it needs to, the brain splits visual information into a few different streams. One of these streams, which runs along the ventral portion of the brain, is linked to object recognition and representing abstract forms.

For companies like Facebook or Google, copying this would be something of a holy grail. Think how much better image search would be if you could properly pull out what objects are in an image. As it is, though, reliably recognizing objects is fairly hard.

Jon Shlens recently visited from Google and gave a talk about their recent research on improving image search (which I see will be presented as a poster at NIPS this week). In order to extract abstract form, they decided, they needed a way to abstract the concept of each image. There is one really obvious way to do this: use words. Semantic space is rich and very easily trainable (and something Google has ample practice at).

Shlens filters

First, they want a way to do things very quickly. One way to get at the structure of an image is to use different ‘filters’ that represent underlying properties of the image. When moved across an image, the combination of these filters can reconstruct the image and identify its important underlying components. Unfortunately, these comparisons are relatively slow, requiring many, many dot products. Instead, they choose just a few points on each filter to compare (left), which speeds things up without a loss of sensitivity.
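A rough sketch of the speed trick as I understand it (illustrative only: the positions here are random, whereas the real system picks its sparse comparisons far more cleverly and relies on the filters themselves being sparse): skip most of the positions in each dot product and you trade a little fidelity for a lot of arithmetic.

```python
import numpy as np

rng = np.random.default_rng(0)

patch = rng.standard_normal((8, 8))          # one image patch
filters = rng.standard_normal((100, 8, 8))   # a bank of 100 filters

# Full comparison: one 64-element dot product per filter.
full_scores = filters.reshape(100, -1) @ patch.ravel()

# Sparse stand-in for the trick: compare only a handful of positions per filter.
# (Random positions and random filters here, so the agreement is imperfect;
# with sparse, learned filters and well-chosen positions, little sensitivity is lost.)
idx = rng.choice(64, size=8, replace=False)
sparse_scores = filters.reshape(100, -1)[:, idx] @ patch.ravel()[idx]

print("correlation between full and sparse scores:",
      round(float(np.corrcoef(full_scores, sparse_scores)[0, 1]), 2))
```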

Once they can do that quickly, they train a deep-learning artificial neural network (ANN) on the images to try to classify them. This does okay. The fancy-pants part is where they also train an ANN on words in Wikipedia. This gives them relationships between all sorts of words and puts the words in an underlying continuous space. Now words have a ‘distance’ between them that tells how similar they are.
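The ‘distance between words’ idea is easy to see with a few toy vectors. A real system would learn these from a large corpus (something like Wikipedia); the numbers below are invented purely to show the point.

```python
import numpy as np

# Toy word vectors; a real system would learn these from a large corpus,
# so the numbers here are invented just to show the "distance between words" idea.
embeddings = {
    "oboe":    np.array([0.9, 0.1, 0.0]),
    "bassoon": np.array([0.8, 0.2, 0.1]),
    "whistle": np.array([0.5, 0.5, 0.3]),
    "truck":   np.array([0.0, 0.1, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# "oboe" sits close to "bassoon", farther from "whistle", and nowhere near "truck"
for word in ("bassoon", "whistle", "truck"):
    print(f"similarity(oboe, {word}) = {cosine(embeddings['oboe'], embeddings[word]):.2f}")
```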

ANN guess

By combining the word data with the visual data, they get a ~83% improvement in performance. More importantly, even when the system is wrong it is only kind of wrong. Look at the sample above: on the left are the guesses of the combined semantic-visual engine and on the right are those of the vision-only guesser. With vision only, guesses vary widely for the same object: a punching bag, a whistle, a bassoon, and a letter opener may all be long straight objects, but they’re not exactly in the same class of things. On the other hand, an English horn, an oboe, and a bassoon are pretty similar (good guesses); even a hand is related, in that hands are used to play instruments. Clearly the semantic-visual engine understands the class of object it is looking at even if it can’t get the precise word 100% of the time. The engine also does very well on unseen data and scales well across many labels.
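Roughly how the combination works, in the spirit of DeViSE but with everything faked for illustration: the vision side learns to map an image into the word-embedding space, and candidate labels are ranked by how close they sit to the image’s predicted vector, which is why the errors end up being semantically sensible.

```python
import numpy as np

# Bare-bones sketch in the spirit of DeViSE (Frome et al., 2013), with random
# stand-ins for everything: a linear map takes image features into the
# word-embedding space, and candidate labels are ranked by cosine similarity.

rng = np.random.default_rng(1)

label_vecs = {                        # pretend these came from the word model
    "oboe":    rng.standard_normal(50),
    "bassoon": rng.standard_normal(50),
    "hand":    rng.standard_normal(50),
    "truck":   rng.standard_normal(50),
}

W = rng.standard_normal((50, 128))          # learned image-feature -> embedding map
image_features = rng.standard_normal(128)   # output of the vision network

predicted = W @ image_features        # the image, projected into semantic space

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

ranking = sorted(label_vecs, key=lambda w: cosine(predicted, label_vecs[w]),
                 reverse=True)
print("labels ranked by closeness to the image:", ranking)
```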

This all makes me wonder: what other sensory modalities could they add? It’s Google, so potentially they could be crawling data from a ‘link-space’ representation. In animals we could add auditory and mechanosensory (touch) input. And does this mean that the study of vision is missing something? Could animals have a sort of ‘semantic’ representation of the world in order to better understand visual or other sensory information? Perhaps multimodal integration is actually the key to understanding our senses.

References

Frome, A., Corrado, G. S., Shlens, J., Bengio, S., Dean, J., Ranzato, M., & Mikolov, T. (2013). DeViSE: A deep visual-semantic embedding model. Advances in Neural Information Processing Systems (NIPS).

Dean, T., Ruzon, M. A., Segal, M., Shlens, J., Vijayanarasimhan, S., & Yagnik, J. (2013). Fast, accurate detection of 100,000 object classes on a single machine. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. DOI: 10.1109/CVPR.2013.237