Can you believe it is December already?
On the blog
I addressed the difference between ‘learning socially’ and ‘social learning’. I find the idea that behavior can adapt without any direct learning occurring to be quite fascinating.
Finally, I discovered that I am either a liar or a crackpot.
David Dobbs wrote an article titled ‘Die, selfish gene, die’. My impression was that the idea he was attacking was not the one presented in the book The Selfish Gene, but rather some kind of strawman. In fairness, Dobbs backtracked on his twitter pretty quickly and clarified his point by restricting it quite heavily. While the writing is undeniably great, the article itself is fairly misleading, which leads me to wonder: does that make it good or bad science writing? I was going to post on it, but my fear got the better of me and I decided I didn’t know enough evolutionary theory to feel comfortable attacking the article. Jerry Coyne does the job that I wanted to do, with more authority, though probably also more venom.
See 100 years of breed ‘improvement’ (pictured above)
The average grade given at Harvard is an A, and one professor is ‘bravely’ fighting that grade inflation by giving a second private, meaningless, grade.
The role of genes in learning across species: a great new blog!
We are seriously defunding the NIH
A shepherd has 120 sheep and 5 dogs. How old is he? When kids are taught to a test, the problem-solving skills they develop are not necessarily the ones that you want them to have.
This is a synaptic vesicle! It’s crazy busy
The Stanford Prison Experiment is widely cited, but remember that the average is not the whole: individual participants reacted in very distinct ways.
No, humans are not chimp-pig hybrids. Apparently that was a possibility?
Why do I always wake up 5 minutes before my alarm goes off? I find that I can do this on trips even when it is not my normal waking-time.
Upping your theory game. Something that neuroscientists really need to do.
In the journals
A multiplicative reinforcement learning model capturing learning dynamics and interindividual variability in mice (pubmed)
Optogenetic activation of an inhibitory network enhances feedforward functional connectivity in auditory cortex (doi)
Dietary choice behavior in Caenorhabditis elegans (doi)
Your weekly image:
Your weekly tweet:
I thought that this was funny:
I figured I could try some other variations:
Humans have a visual bias: everything in vision seems easy and natural to us, and it can seem a bit of a mystery why computers are so bad at it. But there is a reason such a massive chunk (about 30%) of cortex is devoted to it. It’s really hard! To do everything that it needs to, the brain splits visual information into a few different streams. One of these, which runs down the ventral (purple, above) portion of the brain, is linked to object recognition and representing abstract forms.
For companies like Facebook or Google, copying this would be something of a holy grail. Think how much better image search would be if you could reliably pull out which objects are in an image. As it stands, though, that is still fairly hard to do.
Jon Shlens recently visited from Google and gave a talk about their recent research on improving image search (which I see will be presented as a poster at NIPS this week). They decided that in order to extract abstract form, they needed a way to represent the concept behind each image. There is one really obvious way to do this: use words. Semantic space is rich and very easily trainable (and something Google has ample practice with).
First, they need a way to do things very quickly. One way to get at the structure of an image is to use different ‘filters’ that represent its underlying properties. When moved across an image, the combination of these filters can reconstruct it and identify the important underlying components. Unfortunately, these comparisons require many, many dot products and go relatively slowly. Instead, they choose just a few points on the filters to compare (left), which speeds things up without a loss of sensitivity.
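To make that intuition concrete, here is a toy numpy sketch of why a subsampled dot product is a cheap stand-in for the full filter response. The sizes and variable names are made up, and the paper’s actual method is a fancier hashing scheme, so treat this purely as illustration.

```python
import numpy as np

# Toy sketch: a dot product sampled at a few positions, properly rescaled,
# approximates the full filter response at a fraction of the cost.
# (Sizes are invented; the real paper uses a hashing trick.)
rng = np.random.default_rng(0)

patch = rng.standard_normal(1024)   # a flattened 32x32 image patch
filt = rng.standard_normal(1024)    # a filter of the same size

full_response = patch @ filt        # full comparison: 1024 multiply-adds

k = 64                              # compare only 64 of the 1024 positions
idx = rng.choice(patch.size, size=k, replace=False)
estimate = (patch[idx] @ filt[idx]) * (patch.size / k)  # rescaled estimate

print(full_response, estimate)      # cheap estimate tracks the full response
```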
Once they can do that quickly, they train a deep-learning artificial neural network (ANN) on the images to try to classify them. This does okay. The fancy-pants part is where they also train an ANN on words in Wikipedia. This gives them relationships between all sorts of words and puts the words in an underlying continuous space. Now words have a ‘distance’ between them that tells how similar they are.
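For a sense of what ‘distance’ between words means here, a minimal sketch: cosine similarity between word vectors. The vectors below are invented toy values in three dimensions; real embeddings are learned from text (like Wikipedia) and have hundreds of dimensions.

```python
import numpy as np

# Made-up toy vectors; similar words should point in similar directions.
embeddings = {
    "oboe":    np.array([0.9, 0.1, 0.3]),
    "bassoon": np.array([0.8, 0.2, 0.4]),
    "whistle": np.array([0.1, 0.9, 0.5]),
}

def cosine_similarity(a, b):
    # 1.0 means identical direction; more similar words score higher.
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity(embeddings["oboe"], embeddings["bassoon"]))  # high
print(cosine_similarity(embeddings["oboe"], embeddings["whistle"]))  # lower
```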
By combining the word data with the visual data, they get a ~83% improvement in performance. More importantly, even when the system is wrong, it is only kind of wrong. Look at the sample above: on the left are the guesses of the combined semantic-visual engine and on the right is the vision-only guesser. With vision-only, guesses vary widely for the same object: a punching bag, a whistle, a bassoon, and a letter opener may all be long straight objects but they’re not exactly in the same class of things. On the other hand, an English horn, an oboe and a bassoon are pretty similar (good guesses); even a hand is related, in that hands are used to play instruments. Clearly the semantic-visual engine can understand the class of object it is looking at even if it can’t get the precise word 100% of the time. This engine does very well on unseen data and scales very well across many labels.
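My rough reading of how the pieces fit together, as a hedged sketch: map the visual network’s features into the word-embedding space, then label the image with the nearest word vector. Everything below (the projection matrix, dimensions, labels) is invented for illustration, not taken from the paper.

```python
import numpy as np

def predict_label(visual_features, projection, embeddings):
    """Project image features into semantic space; return the closest word."""
    v = projection @ visual_features
    v = v / np.linalg.norm(v)
    def score(word):
        w = embeddings[word]
        return v @ w / np.linalg.norm(w)   # cosine similarity to each label
    return max(embeddings, key=score)

rng = np.random.default_rng(1)
embeddings = {w: rng.standard_normal(3) for w in ["oboe", "bassoon", "whistle"]}
projection = rng.standard_normal((3, 10))  # maps 10-D visual features into 3-D semantic space
features = rng.standard_normal(10)         # stand-in for the visual ANN's output
print(predict_label(features, projection, embeddings))
```

Because every guess is pulled toward a point in semantic space, a miss lands on a nearby word (oboe for bassoon) rather than on something unrelated, which is exactly the ‘only kind of wrong’ behavior described above.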
This all makes me wonder: what other sensory modalities could they add? It’s Google, so potentially they could be crawling data from a ‘link-space’ representation. In animals we could add auditory and mechanosensory (touch) input. And does this mean that the study of vision is missing something? Could animals have a sort of ‘semantic’ representation of the world in order to better understand visual or other sensory information? Perhaps multimodal integration is actually the key to understanding our senses.
Frome A, Corrado GS, Shlens J, Bengio S, Dean J, Ranzato M, & Mikolov T (2013). DeViSE: A Deep Visual-Semantic Embedding Model. NIPS.
Dean T, Ruzon MA, Segal M, Shlens J, Vijayanarasimhan S, & Yagnik J (2013). Fast, Accurate Detection of 100,000 Object Classes on a Single Machine. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. DOI: 10.1109/CVPR.2013.237
Dan Ariely asks what makes us feel good about our work. Here’s a hint as to what it’s not: nasty reviewers (grumble, grumble).
Something light to think about as we head into Thanksgiving!
Neuroscience is a field both new and broad. It has roots in psychology, cognition, molecular biology, psychophysics, and more. Although there are some (slightly self-serving) attempts at defining what the open questions are, the sheer diversity of the field lends itself to many possible questions. The answers for those interested in psychology, in economic behavior, in ecology, in vision, or in molecular biology will all be different.
So, whatever your discipline: what are the most important open questions in your field?
Feel free to respond in the comments, on twitter, or on your blog.
On the blog
I’ve been trying to find links to neuroscience, economics, biology, and ecology resources for people who want to hear something serious. Neuro.tv and the Stanford NeuroTalk podcast are both good, and I recently found this set of excellent talks from the NIH.
I discussed how ‘wise’ crowds are, and when that wisdom might fail. It is actually not straightforward to know when you should listen to what other people have to say, despite what some in the economics field think.
Whether or not neuroscience is ready for open, ultra-collaborative work is a big question. There are a couple of projects that would probably qualify for that title right now (Open Source Brain and OpenWorm), but the more I think about it, the more I find that there is no good consensus on what the big ‘open questions’ in neuroscience are. I am beginning to wonder if a resource for collaboration over the internet might be a valuable product…
Because it is known. Prophecy Sciences wants to use neuroscience to improve hiring/sports.
Dear traveller: please don’t think ill of us. We are the last generation. And we are immortal.
Probably a bad idea. Let’s not give this kid a billion dollars.
Let’s do all of it. Doing the things that you’re not supposed to do with Google Glass. Can I say that I for one don’t understand the Glass hate?
If you stay there too long, you won’t be the same. Are the Andes the most rapidly evolving place on the planet?
I can guess which numbers are even with 95% confidence. Why brains are not computers (ed: except maybe they are, they’re just doing different inference).
Because you want people to understand your stuff. On why engineers and scientists should be worried about color. Very worried.
In the journals
The intrinsic dimensionality of plant traits and its relevance to community ecology. DOI: 10.1111/1365-2745.12187
Toward a neural basis for social behavior. DOI: 10.1016/j.neuron.2013.10.038
Symmetry in hot-to-cold and cold-to-hot valuation gaps. DOI: 10.1177/0956797613502362
Spatial memory and animal movement. DOI: 10.1111/ele.12165
Your weekly image (the newly discovered ‘plant hopper’):
Your weekly tweet: