I love Indian sweets – they’re sugary and buttery and all over delicious. My only problem is that I can never remember what the ones I like are called; I can usually picture them in my head, but that makes it a bit difficult to order. When I go to an Indian market that sells sweets, I hunt through the display case, looking for the one I want. How do I decide which one to buy?
Visual information is notoriously difficult to work with. It seems like it should be easy to us because we are, after all, visual creatures. But there’s a lot of information in every image. The visual system in our brain works through it in consecutive steps, essentially going from neurons representing individual pixels to neurons representing people, objects, things. It does this through successive combinations and decorrelations.
Neurons in the retina primarily respond to dots of light: dark spots, light spots, light spots surrounded by dark spots, that sort of thing. This is then passed on to visual cortex where the spots are aligned to create so-called ‘simple cells’, or edge detectors. These neurons look for lines of dark next to lines of light, or vice versa. In other words, they align individual dark- and light-detectors to see where in a picture things suddenly change.
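The edge-detector idea is simple enough to sketch in a couple of lines. Here’s a toy 1-D example (my own illustration, nothing to do with real neural data): a ‘simple cell’ is just a detector that fires where a bright pixel sits next to a dark one.

```python
import numpy as np

# Toy 1-D "image": a run of dark pixels (0) followed by bright pixels (1)
image = np.array([0, 0, 0, 1, 1, 1], dtype=float)

# A simple-cell-like edge detector: pairs a light-detector with an
# adjacent dark-detector, so it responds only where brightness changes
kernel = np.array([1, -1])
response = np.convolve(image, kernel, mode="valid")

print(response)  # fires exactly where dark meets light
```

The response is zero everywhere the image is flat and spikes at the single dark-to-light transition – which is all an edge detector is.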
This progresses in such a manner until we get to a split in the visual stream, with one pathway roughly representing ‘where’ and one pathway roughly representing ‘what’. Information tumbles down the ‘what’ stream and ends up in area IT [inferotemporal cortex], typically thought of as being used for object recognition.
Nicole Rust gave an excellent talk at UCSD yesterday about what happens when things get to this stage of processing, how IT responds to searched-for images, and what happens in the next stage of processing. Neurons in IT represent objects through a population code – there is no real ‘grandmother cell’, but rather a collection of neurons that combine to represent that grandmother. And when a monkey is searching for an object, the neurons in IT will be able to discriminate between what the animal is searching for and other, non-interesting objects. But this isn’t a linear discrimination. The information is tangled up among the neurons in IT. It’s there, but it might be a little more difficult to get out.
The next stage of the decision-making process is perirhinal cortex, a part of the brain that takes processed information (such as from IT) and uses it for memory-driven tasks. The information on which object to choose is also here, but now it’s easy to get to: you can take the activity of a bunch of neurons in this region and draw a straight line that separates the patterns for interesting objects from the patterns for uninteresting ones. Yet what happens when the animal makes a mistake, and chooses something that it wasn’t looking for? It’s all jumbled up in perirhinal cortex.
How does it “dejumble”? Rust & co. suggest that a simple linear-nonlinear model can reproduce most of the important features of this demixing, including the proportion of cells that appear to respond to specific objects. They have a model for this, but what it comes down to is that the transition between IT and perirhinal cortex is essentially doing quadratic discriminant analysis! It’s interesting to ponder why this seems to happen across regions and not within layers in the same region.
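To get a feel for the difference between linear and quadratic discrimination, here’s a toy example (my own construction, not the paper’s data or model): a ‘target’ class nested inside a ‘distractor’ cloud is tangled from a linear classifier’s point of view, but trivially separable once you allow a quadratic boundary.

```python
import numpy as np
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)

rng = np.random.default_rng(0)

# "Tangled" toy data: the target class is a tight cluster at the origin,
# the distractors a loose cloud around it -- no straight line separates them
target = rng.normal(0.0, 0.5, size=(200, 2))
distractor = rng.normal(0.0, 2.0, size=(200, 2))
X = np.vstack([target, distractor])
y = np.array([1] * 200 + [0] * 200)

lda = LinearDiscriminantAnalysis().fit(X, y)
qda = QuadraticDiscriminantAnalysis().fit(X, y)

# LDA hovers near chance; QDA carves out the circular boundary easily
print("LDA accuracy:", lda.score(X, y))
print("QDA accuracy:", qda.score(X, y))
```

The information about class membership is fully present in both cases – what changes between the two classifiers is only how easy it is to read out, which is the flavor of the IT-to-perirhinal transition.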
If the brain is slowly demixing signals using the same techniques as in machine-learning, is there something to be learned for decision-making in general? Perhaps we need to take into account the algorithms that people are using to understand the data before we can even begin to understand how they make a decision.
Pagan M, Urban LS, Wohl MP, & Rust NC (2013). Signals in inferotemporal and perirhinal cortex suggest an untangling of visual target information. Nature Neuroscience, 16(8), 1132–1139. PMID: 23792943
Monsanto is (partially) switching from GMOs to “naturally” grown plants:
The lettuce is sweeter and crunchier than romaine and has the stay-fresh quality of iceberg. The peppers come in miniature, single-serving sizes to reduce leftovers. The broccoli has three times the usual amount of glucoraphanin, a compound that helps boost antioxidant levels…Frescada lettuce, BellaFina peppers, and Beneforté broccoli—cheery brand names trademarked to an all-but-anonymous Monsanto subsidiary called Seminis—are rolling out at supermarkets across the US.
But here’s the twist: The lettuce, peppers, and broccoli—plus a melon and an onion, with a watermelon soon to follow—aren’t genetically modified at all. Monsanto created all these veggies using good old-fashioned crossbreeding…
In 2006, Monsanto developed a machine called a seed chipper that quickly sorts and shaves off widely varying samples of soybean germplasm from seeds. The seed chipper lets researchers scan tiny genetic variations, just a single nucleotide, to figure out if they’ll result in plants with the traits they want—without having to take the time to let a seed grow into a plant. Monsanto computer models can actually predict inheritance patterns, meaning they can tell which desired traits will successfully be passed on. It’s breeding without breeding, plant sex in silico. In the real world, the odds of stacking 20 different characteristics into a single plant are one in 2 trillion. In nature, it can take a millennium. Monsanto can do it in just a few years.
…There they slice open a classic cantaloupe and their own Melorange for comparison. Tolla’s assessment of the conventional variety is scathing. “It tastes more like a carrot,” he says. Mills agrees: “It’s firm. It’s sweet, but that’s about it. It’s flat.” I take bites of both too. Compared with the standard cantaloupe, the Melorange tastes supercharged; it’s vibrant, fruity, and ultrasweet. I want seconds.
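For a rough sense of where a figure like ‘one in 2 trillion’ comes from: if each desired trait shows up in about a quarter of offspring (my assumption, not the article’s), then 20 independent traits compound into about the right order of magnitude.

```python
# Back-of-the-envelope check of the "one in 2 trillion" figure.
# Assumption (mine, not the article's): each desired trait appears in
# roughly 1 in 4 offspring, and the 20 traits are inherited independently.
p_per_trait = 1 / 4
n_traits = 20

odds = p_per_trait ** n_traits
print(f"1 in {1 / odds:.2e}")  # about 1 in 10^12 -- the right ballpark
```

That lands at roughly one in a trillion, so the article’s number is plausible for traits with somewhat longer per-trait odds.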
I think this neatly illustrates the silliness of much of the debate over GMOs versus natural breeding techniques. One of the interesting facts to come out of this article is the number of GMOs that Monsanto has made that haven’t made it out into the world!
Big agricultural companies say the next revolution on the farm will come from feeding data gathered by tractors and other machinery into computers that tell farmers how to increase their output of crops like corn and soybeans…
The world’s biggest seed company, Monsanto, estimates that data-driven planting advice to farmers could increase world-wide crop production by about $20 billion a year, or about one-third the value of last year’s U.S. corn crop.
The technology could help improve the average corn harvest to more than 200 bushels an acre from the current 160 bushels, companies say. Such a gain would generate an extra $182 an acre in revenue for farmers, based on recent prices. Iowa corn farmers got about $759 an acre last year.
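The per-acre numbers in the excerpt hang together, roughly – here’s the arithmetic spelled out (the figures are the article’s; the implied prices are my derivation):

```python
# Sanity-checking the article's per-acre figures
current_yield = 160    # bushels/acre, current average
projected_yield = 200  # bushels/acre, with data-driven planting
extra_revenue = 182    # $/acre of additional revenue claimed
revenue_per_acre = 759 # $/acre, Iowa corn farmers last year

# Price implied by the extra 40 bushels being worth $182
implied_price = extra_revenue / (projected_yield - current_yield)
print(f"implied corn price: ${implied_price:.2f}/bushel")  # $4.55

# Price implied by last year's Iowa revenue at current yields
price_from_revenue = revenue_per_acre / current_yield
print(f"price from Iowa revenue: ${price_from_revenue:.2f}/bushel")  # $4.74
```

The two implied prices agree to within about 4%, so the article’s numbers are internally consistent with recent corn prices in the $4.50–4.75/bushel range.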
File this under ‘intentional control of our ecology’ and ‘hacking our taste buds’. Next thing you know, they’ll have artificial taste buds…
Trapped on a plane flying to Salt Lake City, I got to thinking about the recent article on how ‘the same brain centers that appreciate art were being activated by beautiful maths‘. In a caffeine-fueled binge, I started writing a purple-prose-filled essay on the subject. Clark Ashton Smith would be proud:
Our brain works through a series of chemical messaging systems: payloads of neurotransmitters cross synapses, ions whizz through directly-connected gap junctions, molecular cascades tumble through cells. And on a gross level we have large chunks of grubby grey matter whose fluctuating electrical potentials draw in blood when we see beauty. Yet the phenomenon of beauty is not solely based on the level of blood flow in our brains; rather, it is the precise matrix of neurons and proteins and peptides that are in flux at the right moment that creates our emergent feelings of aesthetics. The beauty of a sunset is not the beauty of literature is not the beauty of an equation, despite what our burbling blood whispers to the thrumming MRI machines.
Anyway, despite the ‘poetics’, the point is real. There is a lot of cynicism in certain corners of the neuroscience community about the utility of fMRI. This certainly isn’t helped by dead-salmon studies of the ilk that Neuroskeptic or Neurocritic often point out. But that doesn’t mean it isn’t useful! Because of the reward function in science, labs are motivated to oversell their findings – and the media et al. help them get away with it, because they don’t really understand what’s going on and like pretty pictures of brains. Yet even when the result is simply finding that some area of the brain ‘lights up’ to some stimulus, that still tells us something about the underlying circuitry, and where to go check for more details.
Sad news that Jack Belliveau has passed away at age 55:
Dr. Belliveau tried a different approach. He had developed a technique to track blood flow, called dynamic susceptibility contrast, using an M.R.I. scanner that took split-second images, faster than was usual at the time. This would become a standard technique for assessing blood perfusion in stroke patients and others, but Dr. Belliveau thought he would try it to spy on a normal brain in the act of thinking or perceiving.
“He went out to RadioShack and bought a strobe light, like you’d see in a disco,” said Dr. Bruce Rosen, director of the Martinos Center and one of Dr. Belliveau’s advisers at the time. “He thought the strobe would help image the visual areas of the brain, where there was a lot of interest.”
Dr. Belliveau took images of the brain while volunteers watched the strobe, then compared those readings with images taken while the strobe was off, subtracting one from the other.
“It didn’t work,” Dr. Rosen said. “He got nothing.”
He tried again, finding new volunteers and this time outfitting them with goggles that displayed a checkerboard pattern. “That did it,” Dr. Rosen said. “The visual areas lighted up beautifully.”
On Nov. 1, 1991, the journal Science published the findings in a paper by Dr. Belliveau and his colleagues at Massachusetts General Hospital.
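The subtraction logic Dr. Rosen describes is simple enough to sketch with fake data (toy noise arrays standing in for brain images; none of these numbers come from the actual experiment):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy version of the subtraction method: 50 noisy "brain images" per
# condition (8x8 arrays of noise), with extra signal added to a small
# "visual area" patch only during stimulation
baseline = rng.normal(0, 1, size=(50, 8, 8))
stimulus = rng.normal(0, 1, size=(50, 8, 8))
stimulus[:, 2:4, 2:4] += 2.0  # the stimulus-responsive patch

# Average each condition, then subtract baseline from stimulus
difference = stimulus.mean(axis=0) - baseline.mean(axis=0)

# The activated patch stands far above the residual noise
active = difference > 1.0
print(active.astype(int))
```

Averaging over many images shrinks the noise while the stimulus-locked signal stays put, so the difference map lights up only in the responsive patch – the same reason the checkerboard goggles ‘lighted up beautifully’ where the single strobe comparison had failed to produce a usable signal.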
So young, and to die from complications of gastroenteritis is scary.
fMRI gets a lot of flak because it has a tendency to be oversold. I just wrote an overwrought piece on Medium about that very topic, and why, for all the criticisms, it is a crucial piece in the neuroscience puzzle (more on that tomorrow).
Ever since I complained about the fact that half of the Salk faculty candidates this year were from Harvard/MIT, people have been forwarding me their experiences.
Harvard: 1/5 from Harvard itself, 3/5 from UCSD
UCSF: Mainly Harvard/Stanford/MIT
University of Washington: Mix; 1 Harvard, 1 UIUC, 1 U Georgia, etc
Rockefeller: 3 UCSF, 2 Stanford, 2 Harvard, 1 Caltech, 1 Scripps, 1 UChicago, 1 UMich, 2 "Europe"
So: Harvard’s well represented, even in Harvard’s own search. The question is, is that representative of the role it plays in neuroscience research?
OK as long as I’m on the picture shtick, here are over 400 pictures of women in science via Prerana Srestha. Above are Alfred and Mary Gibson doing something badass. I also quite liked this one of Ruth McGuire (good composition) and this one of Marguerite Wilcox; I don’t quite know what she’s doing, but all those tubes in the background look like cool science to me.