Illusions are life

[image: the grey-and-white cross illusion]

Just adding the right combination of grey and white crosses really screws things up, doesn’t it? It seems likely that the illusion comes from the perceived illumination (it probably helps that these are essentially Gabors).

There’s a nice reminder in Science this week that we are not the only organisms subject to illusions – here is one in yeast (from the abstract):

We systematically monitored growth of yeast cells under various frequencies of oscillating osmotic stress. Growth was severely inhibited at a particular resonance frequency, at which cells show hyperactivated transcriptional stress responses. This behavior represents a sensory misperception—the cells incorrectly interpret oscillations as a staircase of ever-increasing osmolarity. The misperception results from the capacity of the osmolarity-sensing kinase network to retrigger with sequential osmotic stresses. Although this feature is critical for coping with natural challenges—like continually increasing osmolarity—it results in a tradeoff of fragility to non-natural oscillatory inputs that match the retriggering time.

In other words, a very non-natural stimulus – a periodic change in salt concentration – leads the yeast to instead ‘see’ a constant increase in the concentration. Pretty cool.
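If you want intuition for how that kind of misperception can fall out of a very simple mechanism, here is a toy sketch (my own cartoon with made-up parameters – the period, the adaptation time constant, all of it – not the kinase model from the paper): a stress sensor that compares the current osmolarity to a slowly adapting reference will re-trigger on every pulse of an oscillation whose period matches its adaptation time, so the perceived osmolarity ratchets up like a staircase while the true mean stays flat.

```python
import numpy as np

# Toy model (my own cartoon with made-up parameters, NOT the paper's
# kinase model): a stress sensor compares the current osmolarity to a
# slowly adapting reference. Anything above the reference is read as a
# fresh increase. If the input oscillates at roughly the adaptation
# (retriggering) timescale, every pulse re-triggers the sensor and the
# "perceived" cumulative osmolarity climbs like a staircase, even though
# the true mean never changes.

dt = 0.1                     # time step (arbitrary units, call them minutes)
t = np.arange(0, 60, dt)     # one hour of simulated time
period = 8.0                 # oscillation period, matched to the retrigger time
osmolarity = (np.sin(2 * np.pi * t / period) > 0).astype(float)  # square wave

tau_adapt = 4.0              # how quickly the internal reference catches up
reference = 0.0
perceived = np.zeros_like(t)

for i in range(1, len(t)):
    # the reference slowly relaxes toward the current osmolarity
    reference += dt / tau_adapt * (osmolarity[i] - reference)
    # anything above the adapted reference is treated as a *new* step up
    surprise = max(osmolarity[i] - reference, 0.0)
    perceived[i] = perceived[i - 1] + surprise * dt

print(f"true mean osmolarity:        {osmolarity.mean():.2f}")   # ~0.5, flat
print(f"perceived cumulative change: {perceived[-1]:.1f}")       # keeps growing
```

The whole trick is that the reference adapts on roughly the same timescale as the oscillation; make the input much slower (or much faster) and the ‘staircase’ mostly disappears.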

(via Kyle Hill)

Recording thousands of cells like it’s nobody’s business

Is this what the world is now? Recording thousands of cells per paper? After the 14,000-neuron magnum opus from Markram comes a paper from Jiang et al. recording 11,000 neurons. When your paper is tossing off bombs like:

We performed simultaneous octuple whole-cell recordings in acute slices prepared from the primary visual cortex (area V1)

and figures like:

[figure: octuple recordings]

you know you are doing something right. How much do you think this cost compared to the Blue Brain project? (Seriously: I have no sense of the scale of the costs for the BBP, nor for this.)

I will try to read this more closely later, but I will leave you with the abstract and some neural network candy for now:

Since the work of Ramón y Cajal in the late 19th and early 20th centuries, neuroscientists have speculated that a complete understanding of neuronal cell types and their connections is key to explaining complex brain functions. However, a complete census of the constituent cell types and their wiring diagram in mature neocortex remains elusive. By combining octuple whole-cell recordings with an optimized avidin-biotin-peroxidase staining technique, we carried out a morphological and electrophysiological census of neuronal types in layers 1, 2/3, and 5 of mature neocortex and mapped the connectivity between more than 11,000 pairs of identified neurons. We categorized 15 types of interneurons, and each exhibited a characteristic pattern of connectivity with other interneuron types and pyramidal cells. The essential connectivity structure of the neocortical microcircuit could be captured by only a few connectivity motifs.

[figure: neural network candy]

Read it here.

This man has neuroscientists FURIOUS!

I just returned from a vacation in London and Scotland, which is a fantastic way to clear your mind. Then I asked people: what did I miss? Apparently this, which instantly clouded my mind right back up.

Here’s a suggestion: never trust an article called “Here’s Why Most Neuroscientists Are Wrong About the Brain”. Especially when the first sentence is flat-out wrong:

Most neuroscientists believe that the brain learns by rewiring itself—by changing the strength of connections between brain cells, or neurons

Okay, maybe not wrong – that is one, though not the only, way that neuroscientists think the brain learns – but it is certainly misleading in the extreme. What are some of the non-synaptic mechanisms that contribute to plasticity?

  1. Homeostatic plasticity/intrinsic excitability
  2. e.g. dopamine-related (all sorts of transcriptional changes)
  3. Developmental plasticity
  4. Attractor states (we can quibble about whether this counts)
  5. Neurogenesis
  6. Dendritic excitability

That’s what I can come up with off the top of my head; I am sure there are more. I’ll just focus on one for a second – because I think it is pretty cool – the intrinsic excitability of a neuron. Neurons maintain a balance of hyperpolarizing and depolarizing ion channels in order to control how often they fire, as well as how responsive they are to input in general. A simple experiment is to block the ability of a neuron to fire for, say, a day. When you remove the blockade, the cell will now be firing much more rapidly. It is pretty easy to imagine that all sorts of things can happen to a network when a cell fundamentally changes how it fires action potentials. [For more, I always think of Gina Turrigiano in connection to this literature.]
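To make that concrete, here is a minimal toy simulation (my own sketch with invented numbers – the set point, the time constants – not a model from the Turrigiano lab or anyone else): a neuron scales an intrinsic gain to keep its firing rate near a set point, so a day-long activity blockade drives the gain way up, and firing rebounds to many times its baseline the moment the blockade comes off.

```python
# A cartoon of homeostatic intrinsic plasticity (my own toy sketch with
# invented numbers, not anyone's published model): the neuron slowly
# scales an intrinsic gain so that its firing rate drifts back toward a
# set point. Silencing it for a "day" drives the gain up, so when the
# blockade is lifted the same synaptic drive produces far more firing.

target_rate = 5.0      # Hz, homeostatic set point
gain = 1.0             # intrinsic excitability
tau_homeo = 200.0      # slow homeostatic time constant (in steps)
drive = 5.0            # constant synaptic drive (arbitrary units)

def firing_rate(drive, gain, blocked):
    # blockade (think TTX): no spikes no matter how excitable the cell is
    return 0.0 if blocked else gain * drive

rates = []
for step in range(3000):
    blocked = 1000 <= step < 2000              # "one day" of activity blockade
    rate = firing_rate(drive, gain, blocked)
    # push excitability up when firing is below target, down when above
    gain = max(gain + (target_rate - rate) / tau_homeo, 0.0)
    rates.append(rate)

print(f"rate before blockade:      {rates[999]:.1f} Hz")   # ~ set point
print(f"rate right after blockade: {rates[2000]:.1f} Hz")  # big rebound
print(f"rate after re-settling:    {rates[-1]:.1f} Hz")    # back near set point
```

The point is just that the thing that changed is the cell’s own excitability, not any synapse.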

 

I also wonder about this quote in the article:

Neuroscientists have not come to terms with this truth. I have repeatedly asked roomfuls of my colleagues, first, whether they believe that the brain stores information by changing synaptic connections—they all say, yes—and then how the brain might store a number in an altered pattern of synaptic connections. They are stumped, or refuse to answer.

Perhaps he was unclear when asking the question, because this is a solved problem. Here is a whole chapter on encoding numbers with ‘synaptic’ weights. Is this how the brain does it? Probably not. But it is trivially easy to train a classic neural network to store numbers.
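For flavor, here is about the smallest possible sketch of the idea (mine, not from the linked chapter): store the number 42 in a vector of ‘synaptic’ weights with the plain delta rule, so that afterward the number lives nowhere except in those weights.

```python
import numpy as np

# A minimal sketch (mine, not from the linked chapter): store the number
# 42 in a vector of "synaptic" weights with the plain delta rule. A fixed
# cue pattern is the input; after training, the number is recoverable
# only from the learned weights.

rng = np.random.default_rng(0)
cue = rng.normal(size=10)           # fixed input pattern (the "recall cue")
target = 42.0                       # the number to store
w = np.zeros(10)                    # synaptic weights

lr = 0.01
for _ in range(500):
    out = w @ cue                   # the neuron's readout
    w += lr * (target - out) * cue  # delta rule: nudge weights toward target

print(f"recalled value: {w @ cue:.3f}")   # ~42.000
```

Is that how cortex stores a number? Almost certainly not. But it makes the point that ‘store a number in a pattern of synaptic weights’ is not a conceptually mysterious demand.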

I do not mean to harp on the author of this piece, but these are interesting questions that he is raising! I love molecules. Neuroscience often enjoys forgetting about them for simplicity. But it is important that neuroscientists are clear on what we already know – and what we actually do not.

[note: sorry, I meant to post this late last week but my auto-posting got screwed up. I’m going to blame this on jet lag…]