Illusions are life


Just adding the right combination of grey and white crosses really screws things up, doesn’t it? It seems likely that the illusion comes from the perceived illumination (it probably helps that these are essentially Gabors).

There’s a nice reminder in Science this week that we are not the only organisms subject to illusions – here is one in yeast (from the abstract):

We systematically monitored growth of yeast cells under various frequencies of oscillating osmotic stress. Growth was severely inhibited at a particular resonance frequency, at which cells show hyperactivated transcriptional stress responses. This behavior represents a sensory misperception—the cells incorrectly interpret oscillations as a staircase of ever-increasing osmolarity. The misperception results from the capacity of the osmolarity-sensing kinase network to retrigger with sequential osmotic stresses. Although this feature is critical for coping with natural challenges—like continually increasing osmolarity—it results in a tradeoff of fragility to non-natural oscillatory inputs that match the retriggering time.

In other words, a very non-natural stimulus – a periodic change in salt concentration – leads the yeast to instead ‘see’ a constant increase in the concentration. Pretty cool.
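A toy way to see how retriggering turns an oscillation into a staircase (my own sketch, not the authors’ model): suppose the pathway adapts to each new osmolarity level and responds only to further increases. The cumulative stress signal is then just the sum of upward steps, which keeps growing for an oscillating input even though the true osmolarity never exceeds its peak.

```python
import numpy as np

def perceived_cumulative_increase(osmolarity):
    """A sensor that adapts to each new level and retriggers on every further
    increase effectively sums only the upward steps of its input."""
    steps = np.diff(osmolarity)
    return np.cumsum(np.clip(steps, 0, None))

t = np.arange(200)
oscillating = 0.5 + 0.5 * np.sign(np.sin(2 * np.pi * t / 20))  # bounded square-wave stress
staircase = np.linspace(0, 5, t.size)                          # genuinely increasing stress

# The oscillation never exceeds 1, yet the 'perceived' increase grows by ~1 per cycle,
# just as it would for a real staircase -- the misperception described in the abstract.
print(perceived_cumulative_increase(oscillating)[-1])
print(perceived_cumulative_increase(staircase)[-1])
```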

(via Kyle Hill)

Recording thousands of cells like it’s nobody’s business

Is this what the world is now? Recording thousands of cells per paper? After the 14,000-neuron magnum opus from Markram comes a paper from Jiang et al. mapping the connectivity of more than 11,000 pairs of neurons. When your paper is tossing off bombs like:

We performed simultaneous octuple whole-cell recordings in acute slices prepared from the primary visual cortex (area V1)

and figures like:

octuple recordings

you know you are doing something right. How much do you think this cost compared to the Blue Brain project? (Seriously: I have no sense of the scale of the costs for the BBP, nor for this.)

I will try to read this more closely later, but I will leave you with the abstract and some neural network candy for now:

Since the work of Ramón y Cajal in the late 19th and early 20th centuries, neuroscientists have speculated that a complete understanding of neuronal cell types and their connections is key to explaining complex brain functions. However, a complete census of the constituent cell types and their wiring diagram in mature neocortex remains elusive. By combining octuple whole-cell recordings with an optimized avidin-biotin-peroxidase staining technique, we carried out a morphological and electrophysiological census of neuronal types in layers 1, 2/3, and 5 of mature neocortex and mapped the connectivity between more than 11,000 pairs of identified neurons. We categorized 15 types of interneurons, and each exhibited a characteristic pattern of connectivity with other interneuron types and pyramidal cells. The essential connectivity structure of the neocortical microcircuit could be captured by only a few connectivity motifs.

NN candy

Read it here.

This man has neuroscientists FURIOUS!

I just returned from a vacation in London and Scotland, which is a fantastic way to clear your mind. Then I returned and asked people: what did I miss? Apparently this, which instantly clouded my mind right back up.

Here’s a suggestion: never trust an article called “Here’s Why Most Neuroscientists Are Wrong About the Brain”. Especially when the first sentence is flat-out wrong:

Most neuroscientists believe that the brain learns by rewiring itself—by changing the strength of connections between brain cells, or neurons

Okay maybe not wrong – that is one, though not the only, way that neuroscientists think the brain learns – but certainly misleading in the extreme. What are some of the non-synaptic mechanisms that contribute to plasticity?

  1. Homeostatic plasticity/intrinsic excitability
  2. Transcriptional changes (e.g. all sorts of dopamine-related ones)
  3. Developmental plasticity
  4. Attractor states (we can quibble about whether this counts)
  5. Neurogenesis
  6. Dendritic excitability

That’s what I can come up with off the top of my head; I am sure there are more. I’ll just focus on one for a second – because I think it is pretty cool – the intrinsic excitability of a neuron. Neurons maintain a balance of hyperpolarizing and depolarizing ion channels in order to control how often they fire, as well as how responsive they are to input in general. A simple experiment is to block the ability of a neuron to fire for, say, a day. When you remove the blockade, the cell will now be firing much more rapidly. It is pretty easy to imagine that all sorts of things can happen to a network when a cell fundamentally changes how it fires action potentials. [For more, I always think of Gina Turrigiano in connection with this literature.]
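Here is a cartoon of that intrinsic, non-synaptic knob (a sketch of the general idea, not any particular published model): let a cell’s excitability creep up whenever its firing rate sits below a target and creep down when it sits above. Block firing for a day and the excitability ratchets upward the whole time; lift the block and the cell fires well above its old rate, without a single synaptic weight changing.

```python
import numpy as np

def simulate(hours, target_rate=5.0, tau=20.0, drive=5.0, gain0=1.0, blocked=(0, 0)):
    """Toy homeostatic intrinsic plasticity: rate = gain * drive (0 while blocked),
    and the intrinsic gain slowly adjusts so the rate tracks target_rate."""
    gain = gain0
    rates = []
    for h in range(hours):
        rate = 0.0 if blocked[0] <= h < blocked[1] else gain * drive
        gain += (target_rate - rate) / (tau * drive)  # slow homeostatic adjustment
        rates.append(rate)
    return np.array(rates)

# Block all firing for the first 24 'hours', then lift the blockade.
rates = simulate(hours=48, blocked=(0, 24))
print(rates[23], rates[24])  # 0.0 during the blockade, then ~11 Hz vs the 5 Hz set point
```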


I also wonder about this quote in the article:

Neuroscientists have not come to terms with this truth. I have repeatedly asked roomfuls of my colleagues, first, whether they believe that the brain stores information by changing synaptic connections—they all say, yes—and then how the brain might store a number in an altered pattern of synaptic connections. They are stumped, or refuse to answer.

Perhaps he was unclear when asking the question, because this is a solved problem. Here is a whole chapter on encoding numbers with ‘synaptic’ weights. Is this how the brain does it? Probably not. But it is trivially easy to train a classic neural network to store numbers.
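To make “trivially easy” concrete (my own toy, not the linked chapter’s example): a single linear unit can store any number you like in its weights, and plain gradient descent will find those weights for you.

```python
import numpy as np

rng = np.random.default_rng(0)

cue = rng.normal(size=10)   # an arbitrary, fixed input pattern
w = np.zeros(10)            # the 'synaptic' weights
target = 42.0               # the number we want the network to store

# Plain gradient descent on squared error: after training, presenting the cue
# retrieves the stored number, and that number lives only in the weights.
for _ in range(500):
    w += 0.02 * (target - w @ cue) * cue

print(w @ cue)  # ~42.0
```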

I do not mean to harp on the author of this piece, but these are interesting questions that he is raising! I love molecules. Neuroscience often enjoys forgetting about them for simplicity. But it is important that neuroscientists are clear on what we already know – and what we actually do not.

[note: sorry, I meant to post this late last week but my auto-posting got screwed up. I’m going to blame this on jet lag…]

Behold, The Blue Brain

The Blue Brain project releases their first major paper today and boy, it’s a doozy. Including supplements, it’s over 100 pages long, with 40 figures and 6 tables. In order to properly understand everything in the paper, you have to go back and read a bunch of other papers they have released over the years that detail their methods. This is not a scientific paper: it’s a goddamn philosophical treatise on The Nature of Neural Reconstruction.

The Blue Brain Project – or should I say Henry Markram? it is hard to say where the two diverge – aims to simulate absolutely everything in a complete mammalian brain. Except right now it sits at a middle ground: other simulations have replicated more neurons (Izhikevich had a model with 10^11 neurons of 21 subtypes). At the other extreme, MCell has completely reconstructed everything about a single neuron – down to the diffusion of single molecules – in a way that Blue Brain does not.

The focus of Blue Brain right now is a certain level of simulation that derives from a particular mindset in neuroscience. You see, people in neuroscience work at all levels: from individual molecules to flickering ion channels to single neurons up to networks and then whole brain regions. Markram came out of Bert Sakmann’s lab (where he discovered STDP) and has his eye on the ‘classical’ tradition that stretches back to Hodgkin and Huxley. He is measuring distributions of ion channels and spiking patterns and extending the basic Hodgkin-Huxley model into tinier compartments and ever more fractal branching patterns. In a sense, this is swimming against the current of contemporary neuroscience. While plenty of people are still doing single-cell physiology, new tools that allow imaging of many neurons simultaneously in behaving animals have reshaped the direction of the field – and what we can understand about neural networks.
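For readers who have not met that ‘classical’ tradition: a Hodgkin-Huxley compartment is just four coupled differential equations, and Blue Brain’s morphologically detailed neurons are, roughly speaking, many such compartments with additional channel types fit on top. A minimal single-compartment version with the standard textbook squid-axon parameters (nothing here is Blue Brain’s actual code or parameter set):

```python
import numpy as np

# Single-compartment Hodgkin-Huxley neuron, standard squid-axon parameters,
# forward-Euler integration. Units: mV, ms, uA/cm^2, mS/cm^2, uF/cm^2.
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
ENa, EK, EL = 50.0, -77.0, -54.387

def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

dt, T, I_inj = 0.01, 100.0, 10.0        # 100 ms of constant current injection
V, m, h, n = -65.0, 0.05, 0.6, 0.32     # start near rest
spikes = 0
for _ in range(int(T / dt)):
    I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
    V_new = V + dt * (I_inj - I_ion) / C
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    if V < 0 <= V_new:                  # count upward crossings of 0 mV as spikes
        spikes += 1
    V = V_new
print(f"{spikes} spikes in {T:.0f} ms")  # tonic firing at this level of drive
```

Multiply that by thousands of compartments per cell, dozens of channel types, and millions of synapses, and you start to see how many parameters a reconstruction like this has to pin down.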

Some very deep questions arise here: is this enough? What will this tell us and what can it not tell us? What do we mean when we say we want to simulate the brain? How much is enough? We don’t really know – though the answer to the first question is assuredly no – and we assuredly don’t know enough to even begin to answer the second set of questions.


The function of the new paper is to collate in one place all of the data that they have been collecting – and it is a doozy. They report having recorded and labeled >14,000 (!!!!!) neurons from somatosensory cortex of P14 rats with full reconstruction of more than 1,000 of these neurons. That’s, uh, a lot. And they use a somewhat-convoluted terminology to describe all of these, throwing around terms like ‘m-type’ and ‘e-type’ and ‘me-type’ in order to classify the neurons. It’s something, I guess.


Since the neurons were taken from different animals at different times, they do a lot of inference to determine connectivity, ion channel conductance, etc. And that’s a big worry because – how many parameters are being fit here? How many channels are being missed? You get funny sentences in the paper like:

[We compared] in silico (ed – modeled) PSPs with the corresponding in vitro (ed – measured in a slice prep) PSPs. The in silico PSPs were systematically lower (ed – our model was systematically different from the data). The results suggested that reported conductances are about 3-fold too low for excitatory connections, and 2-fold too low for inhibitory connections.

And this worries me a bit; are they not trusting their own measurements when it suits them? Perhaps someone who reads the paper more closely can clarify these points.

They then proceed to run these simulated neurons and perform ‘in silico experiments’. They first describe lowering the extracellular calcium level and finding that the network transitions from a regularly spiking state to a variable (asynchronous) state. And then they go and do this experiment on biological neurons and get the same thing! That is a nice win for the model; they made a prediction and validated it.

On the other hand you get statements like the following:

We then used the virtual slice to explore the behavior of the microcircuitry for a wide range of tonic depolarization and Ca2+ levels. We found a spectrum of network states ranging from one extreme, where neuronal activity was largely synchronous, to another, where it was largely asynchronous. The spectrum was observed in virtual slices, constructed from all 35 individual instantiations of the microcircuit and all seven instantiations of the average microcircuit.

In other words, it sounds like they might be able to find everything in their model.

But on the other hand…! They fix their virtual networks and ask: do we see the specific changes in our network that experiments have seen in the past? And yes, generally they do. Are we allowed to wonder how many of these experiments and predictions did not pan out? It would have been great to see a full-blown failure, to understand where the model still needs to be improved.

I don’t want to understate the sheer amount of work that was done here, or the wonderful collection of data that they now have available. The models that they make will be (already are?) available for anyone to download and this is going to be an invaluable resource. This is a major paper, and rightly so.

On the other hand – what did I learn from this paper? I’m not sure. The network wasn’t really doing anything, it just kind of…spiked. It wasn’t processing structured information like an animal’s brain would, it was just kind of sitting there, occasionally having an epileptic fit (note that at one point they do simulate thalamic input into the model, which I found to be quite interesting).

This project has metamorphosed into a bit of a social conundrum for the field. Clearly, people are fascinated – I had three different people send me this paper prior to its publication, and a lot of others were quite excited and wanted access to it right away. And the broader Blue Brain Project has had a somewhat unhappy political history. A lot of people – like me! – are strong believers in computation and modeling, and would really like to see it succeed. Yet the chunk of neuroscience that they have bitten off, and the way they have gone about it, lead many to worry. The field had been holding its breath a bit to see what Blue Brain was going to release – and I think they will need to hold their breath a bit longer.


Markram et al. (2015). Reconstruction and Simulation of Neocortical Microcircuitry. Cell.


Get your name on the postdoc job list

Executive Summary:

Are you a postdoc who is looking for a job, or will be soon? Add yourself to The List.

Are you on a search committee and want access to the list? Email me from an official university e-mail account.

Leslie Vosshall took to Twitter recently to lament the fact that only 25% of applicants for tenure-track positions at Rockefeller University are female. Others chimed in and said this was a common problem. You can see the discussion that followed at Drugmonkey.

So wouldn’t it be great if there were a list of researchers who are looking for jobs – male and female – that search committees could peruse to find applicants who are relevant but have not contacted them? I went ahead and began the makings of such a list here.

I am not yet totally sure what the best way to do this is. Should it be partially public? Or simply private? This is not a list for other postdocs to peruse, but one for faculty on search committees to examine. Is there other information that would be helpful that is not on the form?

As always, go to neurorumblr for a list of open faculty positions. There are over 80 up already! This information will be added to the page soon (when I beautify it, because honestly that thing is pretty hideous right now).

Beth Stevens, 2015 MacArthur Fellow

Congratulations to Beth Stevens of Harvard for receiving a 2015 MacArthur fellowship. Her work has focused on microglia, surely an under-researched topic.

The last neuroscientist to win the fellowship was Sheila Nirenberg of Cornell in 2013 for her work on retinal prostheses (though she is better known for her excellent work on neural coding).

Not your typical science models


Cell has an article showcasing other animal model candidates beyond the typical C. elegans, Drosophila, mice, etc. Really it is just a list of people using other models explaining why they use them, but it is pretty cool to learn about what they are doing. They list Thellungiella sp., axolotl, naked mole rats, lampreys (which look terrifying), honey bees, bats, mouse lemurs (with which Tony Movshon famously trolled all rodent vision scientists), turquoise killifish, and songbirds. Because I love bees, here is the bee explanation:

Honey bees (Apis mellifera) provide remarkable opportunities for understanding complex behavior, with systems of division of labor, communication, decision making, and social aging/immunity. They teach us how social behaviors develop from solitary behavioral modules, with only minor “tweaking” of molecular networks. They help us unravel the fundamental properties of learning, memory, and symbolic language. They reveal the dynamics of collective decision making and how social plasticity can change epigenetic brain programming or reverse brain aging. They show us the mechanistic basis of trans-generational immune priming in invertebrates, perhaps facilitating the first vaccines for insects.

These processes and more can be studied across the levels of biological complexity—from genes to societies and over multiple timescales—from action potential to evolutionary. As models in neuroscience and animal behavior, honey bees have batteries of established research tools for brain/behavioral patterns, sensory perception, and cognition. Genome sequence, molecular tools, and a number of functional genomic tools are also available. With a relatively large-sized body (1.5 cm) and brain (1 mm3), this fascinating animal is, additionally, easy to work with for students of all ages.

Beekeeping practices date as early as the Minoan Civilization, where the bee symbolized a Mother Goddess. Today, we increasingly value honey bees as essential pollinators of commercial crops and for their ecosystem services. Honey bees have been called keepers of the logic of life. They truly are.

I would add mosquitoes, ants, deer mice, (prairie/etc) voles, cuttlefish, jellyfish and of course marmosets to the list.

Your friends determine your economy

What is it that distinguishes economies that take advantage of new products from those that don’t?

Matthew Jackson visited Princeton last week and gave a seminar on “Information and Gossip in Networks”. It was sadly lacking in any good gossip (if you have any, please send it to me), but he gave an excellent talk on how a village’s social network directly affects its economy.

He was able to collect data from a microfinance institution in India that began offering credit in 75 villages in Karnataka. Yet despite being relatively homogeneous – they were all small, poor, widely dispersed villages in a single Indian state – there was a large amount of variability in how many people in each village participated in the program. What explains this?

Quite simply, the social connections do. When the microfinance institution entered a village, they did so by approaching village leaders and telling them about the program, its advantages, and why they should participate. These village leaders were then responsible for informing the people in their village about the program.

social network

Jackson’s team was able to compile the complete social networks of everyone in these villages. They knew who went to temple with whom, whom they trusted enough to lend money to, whom they considered their friends, and so on. It is quite an impressive bit of work; unfortunately I cannot find any of his examples online anywhere. They found, for instance, incredible segregation by caste (not surprising, but nice that it falls out of the data so naturally).

What determined the participation rate was how connected the leaders were to the rest of their village. Not just how many friends they had, but how many friends their friends had, and so on. To get an even better fit, they modeled the decision as a diffusion from the leaders out to their friends. They would slowly, randomly tell some of their friends, who would tell some of their other friends, and so on.

network diffusion

Jackson said that he got a rho^2 of 0.3 using traditional centrality measures and 0.47 (a roughly 50% improvement) using his new model. The main difference with his new model (‘diffusion centrality’) appears to be time, which makes sense. When a program has been in a village for longer, more people will have taken advantage of it; people do not all rush out to get the Hot New Thing on the first day they can.
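For the curious, diffusion centrality has a compact definition in the Banerjee et al. papers cited below: with adjacency matrix A, passing probability q, and T rounds of gossip, a node’s centrality is the expected number of times everyone else hears news that starts with it – the row sums of (qA) + (qA)^2 + … + (qA)^T. Here is a quick sketch on a made-up network (the graph and parameter values are illustrative, not the villages’ data):

```python
import numpy as np

def diffusion_centrality(A, q, T):
    """Diffusion centrality: row sums of sum_{t=1..T} (q*A)^t, where A is the
    adjacency matrix, q the per-link passing probability, T the number of rounds."""
    P = np.eye(A.shape[0])
    M = np.zeros_like(A, dtype=float)
    for _ in range(T):
        P = P @ (q * A)
        M += P
    return M.sum(axis=1)

# A toy 5-person village: person 0 is a hub who knows everyone; 3 and 4 also know each other.
A = np.array([[0, 1, 1, 1, 1],
              [1, 0, 0, 0, 0],
              [1, 0, 0, 0, 0],
              [1, 0, 0, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)

print(diffusion_centrality(A, q=0.3, T=3))
# The hub dominates, and increasing T (more rounds of gossip) is exactly the
# 'time' ingredient that static centrality measures leave out.
```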

Village leaders are not the only people they could have told. It would be nice if they could find more central individuals – people even better connected than the leaders. Impressively, they find that they can simply ask a random adult who the best person in the village to tell would be – and there is a good chance that they will know. This is exciting – it means people implicitly know about the social network structure of their world.

The moral of the story is that in order to understand economic processes, you need to understand the structure of the economy and you need to understand the dynamics. Static processes are insufficient – or at least, are much, much noisier.


Banerjee A, Chandrasekhar AG, Duflo E, & Jackson MO (2014). Gossip: Identifying Central Individuals in a Social Network. SSRN Electronic Journal. DOI: 10.2139/ssrn.2425379

Banerjee A, Chandrasekhar AG, Duflo E, & Jackson MO (2013). The diffusion of microfinance. Science, 341(6144). PMID: 23888042

A quick primer in uploading your brain

Amy Harmon has an article in the New York Times about cryonics and immortality that is heartrending and beautiful. Here is what the neuroscientists have to say:

“I can see within, say, 40 years that we would have a method to generate a digital replica of a person’s mind,” said Winfried Denk, a director at the Max Planck Institute of Neurobiology in Germany, who has invented one of several mapping techniques. “It’s not my primary motivation, but it is a logical outgrowth of our work.”

Other neuroscientists do not take that idea seriously, given the great gaps in knowledge about the workings of the brain. “We are nowhere close to brain emulation given our current level of understanding,” said Cori Bargmann, a neuroscientist at Rockefeller University in New York and one of the architects of the Obama administration’s initiative seeking a $4.5 billion investment in brain research over the next decade.

“Will it ever be possible?” she asked. “I don’t know. But this isn’t 50 years away.”

…The fundamental question of how the brain’s physical processes give rise to thoughts, feelings and behavior, much less how to simulate them, remains a mystery. So many neuroscientists see the possibility of reproducing an individual’s consciousness as unforeseeably far off.

“We have to recognize that there are many huge gaps that have to be leaped over,” said Stephen J. Smith, a neuroscientist at the Allen Institute for Brain Science in Seattle. “The brain is holding on to many of its secrets.”

Jeffrey Lichtman, a Harvard University neuroscientist, said, “Nothing happening now is close to a reality where a human patient might imagine that their brain could be turned into something that could be reproduced in silico.”

Count me on the Bargmann side of things (as I am in most things).

  1. Presumably we need to be able to reconstruct the 3D morphology of every cell body, axon, synaptic bouton, and vesicle
  2. Also, reconstruct all of those mysterious glia. What do they do, again?
  3. Is there a consistent learning rule between neurons? I cannot find my usual reference for this, but long-term potentiation, depression, STDP, etc. are not the same in every anatomical region (a generic pair-based STDP update is sketched after this list). There is a reason most LTP work was historically done in hippocampus.
  4. What about the extracellular matrix? Does it store our memories?
  5.  Don’t forget hormones. They can directly enter cells; think about what we need to know to simulate their diffusion throughout the brain.
  6. Are you planning on sequencing your microbiome as well? How much of the rest of your body contributes to your cognition?
  7. What genes, where, how are they transcribed, when are they transcribed, how do the enhancers interact with etc etc.
  8. Have you ever seen an image of the molecular pathways that contribute to neuronal function? Here is a (likely incomplete) description of the pathway for dopamine in a single cell. Remember, every cell is different and has how many modulatory pathways…?
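On point 3, for anyone who has not seen one written down: a pair-based STDP rule is just an exponential window on spike-timing differences. The amplitudes and time constants below are generic textbook values – and the fact that they (and even the sign of the window) differ across synapse types and regions is exactly the problem for a one-size-fits-all upload.

```python
import numpy as np

def stdp_dw(delta_t, A_plus=0.01, A_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: weight change as a function of delta_t = t_post - t_pre (ms).
    Pre-before-post (delta_t > 0) potentiates; post-before-pre depresses.
    Parameter values are generic; real synapses vary by cell type and region."""
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t > 0,
                    A_plus * np.exp(-delta_t / tau_plus),
                    -A_minus * np.exp(delta_t / tau_minus))

print(stdp_dw([-10.0, 10.0]))  # depression, then potentiation
```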

In other words, we are probably not going to be done solving biology any time soon.

In neuroscience, there are more unknown unknowns than there are known unknowns. 40 years for ‘mind uploading’ is not even wrong.

(For the record, I would have my ‘mind’ uploaded the moment it is possible. I do think it is theoretically possible, to some approximation. But sadly, that won’t happen in my lifetime.)

Unrelated to all that, 8/29 edition

The time, where has it gone?

Top 8 industrial robot companies and how many robots they have around the world

  1. Yaskawa – 300,000
  2. ABB – 250,000
  3. Fanuc – 250,000
  4. Kawasaki – 110,000
  5. Kuka – 80,000
  6. Denso – 80,000
  7. Epson – 45,000
  8. Adept – 25,000

A Neural Algorithm of Artistic Style


Like DeepDream only not super-creepy


A history of graphic design: the San Francisco school

The word “psychedelic” is a combination of the Greek words psyche and delos, and means “mind manifesting” or “soul manifesting.” Contrary to a prevalent myth about the role of mind-altering drugs in creation of this art, most revolutionary artists of this school including Wes Wilson, Victor Moscoso, Bonnie MacLean, Lee Conklin, Stanley Mouse and others had artistic talent, some formal art training and adhered to strict aesthetic discipline. Almost all the experiments that relied solely on the hallucinating impact of psychedelic drugs ended-up in total failure.


Asking Questions

asking questions

Top 10 signs that a paper/field is bogus

  1. Run the numbers. One consistent issue in molecular biology is that because it tends to be so qualitative, we have little sense of the magnitudes and plausibility of various mechanisms. That said, we are now getting to the point where we have a lot more quantitative data that lets us run some basic sanity checks (BioNumbers is a great resource for this). An example that I’ve come across often is mRNA localization. Many people I’ve met have, umm, fairly fanciful notions of the degree to which mRNA is localized. From what we’ve seen in the lab, almost every mRNA seems to just be randomly distributed around the cytoplasm, with the exception being ER-localized ones, which are, well, localized to the ER. Ask yourself: why should there be any mRNA localization? Numbers indicate that proteins diffuse quite rapidly around the cell, on a timescale that is likely faster than mRNA transport (a back-of-envelope version of this is sketched after the list). So for most cells, the numbers say that you shouldn’t localize mRNA – rather, just localize proteins. And, uh, that’s what we see…
  2. Consider why nobody has seen this Amazing New Phenomenon before. Was it a lack of technology? Okay, then it might be real. Was it just brute force? Also possible that it’s real. Was it just waiting for someone to think of the idea? Well, in my experience, nutty ideas are relatively cheap.
  3. etc
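As an example of running the numbers for point 1 (order-of-magnitude values of the BioNumbers sort, nothing fancier): a typical cytoplasmic protein diffuses at roughly D ≈ 10 µm²/s, so it covers a ~20 µm cell in seconds – about as fast as a motor could haul the mRNA the same distance.

```python
# Back-of-envelope sanity check: protein diffusion vs. motor-driven mRNA transport.
# All numbers are order-of-magnitude values of the sort collected in BioNumbers.
D_protein = 10.0   # um^2/s, rough cytoplasmic diffusion coefficient for a protein
L = 20.0           # um, a typical cell-length scale
v_motor = 1.0      # um/s, rough kinesin/dynein transport speed

t_diffusion = L**2 / (6 * D_protein)  # time for rms 3D displacement to reach L
t_transport = L / v_motor             # time to actively haul an mRNA the distance L

print(f"protein diffusion over {L:.0f} um: ~{t_diffusion:.0f} s")
print(f"motor transport over {L:.0f} um:  ~{t_transport:.0f} s")
# Both are seconds, so in an ordinary-sized cell localizing the mRNA buys little;
# the calculus changes in huge or highly polarized cells (neurons, oocytes).
```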

aka How to actually Think Like A Scientist

DNA Pac Man

DNA pac man