Learn by consuming the brains of your enemies

A few people have sent this my way and asked about it:

In a paper published Monday in the journal eNeuro, scientists at the University of California-Los Angeles reported that when they transferred molecules from the brain cells of trained snails to untrained snails, the animals behaved as if they remembered the trained snails’ experiences…

In experiments by Dr. Glanzman and colleagues, when these snails get a little electric shock, they briefly retract their frilly siphons, which they use for expelling waste. A snail that has been shocked before, however, retracts its siphon for much longer than a new snail recruit.

To understand what was happening in their snails, the researchers first extracted all the RNA from the brain cells of trained snails, and injected it into new snails. To their surprise, the new snails kept their siphons wrapped up much longer after a shock, almost as if they’d been trained.

Next, the researchers took the brain cells of trained snails and untrained snails and grew them in the lab. They bathed the untrained neurons in RNA from trained cells, then gave them a shock, and saw that they fired in the same way that trained neurons do. The memory of the trained cells appeared to have been transferred to the untrained ones.

The full paper is here.

The long and short of this is that there is a particular reflex (a memory) that changes once the snails have experienced a lot of shocks. How memory is encoded is still somewhat debated, but one strongly-supported mechanism (especially in these snails) is a change in the amount of particular proteins expressed in some neurons. These proteins might produce more of a channel or receptor that makes a neuron more or less likely to respond to signals from other neurons. So for instance, when a snail receives its first shock, a neuron responds and the animal withdraws its siphon. Over time, each shock builds up more of the proteins that make the neuron respond more and more strongly. How much protein gets built depends on the amount of RNA (the “blueprint” for the proteins, if you will) located in the part of the neuron that receives this information. There are a lot of sophisticated mechanisms that determine how and where these RNAs are made and then shipped off to the place in the neuron where they can be of the most use.

This new paper shows that, in these snails, you can just dump RNA from another animal onto these neurons, and that RNA already encodes something about the type of protein it will produce. This is not going to work in most situations (I think?), so it is surprising and cool that it does here! But hopefully you can begin to see what is happening and how the memory is transferring: the RNA is now in the cell, it is marked in a way that will lead it to produce some protein that will change how the cell responds to input, and so on.
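To make that mechanism concrete, here is a toy simulation (my own sketch, not anything from the paper; every number is invented): each shock adds to a pool of “sensitizing” protein, the pool decays slowly, and the withdrawal duration grows with the pool. Injecting RNA is modeled as a one-time head start on protein synthesis.

```python
import numpy as np

def simulate_snail(n_shocks, rna_boost=0.0, gain=1.0, decay=0.95):
    """Toy model: each shock adds protein; protein decays between shocks.
    Withdrawal duration is a saturating function of protein level."""
    protein = rna_boost          # injected RNA jump-starts protein synthesis
    durations = []
    for _ in range(n_shocks):
        protein = decay * protein + gain   # a shock drives new protein synthesis
        # withdrawal duration saturates as protein accumulates
        durations.append(10 + 40 * (1 - np.exp(-protein / 5)))
    return durations

naive   = simulate_snail(n_shocks=5)
trained = simulate_snail(n_shocks=5, rna_boost=10.0)  # "memory transfer"
print("naive first-shock withdrawal: %.1f s" % naive[0])
print("RNA-injected first-shock:     %.1f s" % trained[0])
```

The RNA-injected snail starts out behaving like a shocked one, which is the qualitative pattern the paper reports.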

One of the people who asked me about this asked specifically in relation to AI. Could this be used as a new method of training in deep networks somehow? The closest analogy I can think of is if you have two networks with the same architecture that have been trained in the same way (this is the stand-in for evolution). Then you train one a little more, maybe on new stimuli or on a new task, or maybe you are doing reinforcement learning and that network now predicts a different action-value pair. The analogy would be to choose a few units (neurons) and directly copy their weights from the first network into the second network. Would this work? Would this be useful? I doubt it, but maybe? But see this interesting paper on knowledge distillation that was pointed out to me by John O’Malia.
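Here is that thought experiment as a minimal numpy sketch (the architecture, unit indices, and sizes are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_net(n_in=10, n_hidden=32, n_out=2):
    """A two-layer network; both 'animals' share the same architecture."""
    return {"W1": rng.normal(size=(n_hidden, n_in)),
            "W2": rng.normal(size=(n_out, n_hidden))}

net_a = init_net()   # stands in for the 'trained' network
net_b = init_net()   # same architecture, different training history

# The analogy to RNA transfer: pick a few units and copy their weights wholesale.
units = [3, 7, 19]
for u in units:
    net_b["W1"][u, :] = net_a["W1"][u, :]   # the unit's incoming weights
    net_b["W2"][:, u] = net_a["W2"][:, u]   # ...and its outgoing weights

# Whether net_b now 'remembers' anything depends on how the surrounding units
# interpret the transplanted weights -- which is exactly why the biological
# result is surprising.
```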


Controversial topics in Neuroscience

Twitter thread here.

  • Do mice use vision much?
    • They have pretty crappy eyesight and their primary mode of exploration seems to be olfactory/whisker-based
  • How much is mouse cortex like primate cortex?
    • Mouse cortex is claimed to be more multimodal than primate cortex, which is more specialized
  • “The brain does deep learning”
    • Deep learning units aren’t exactly like neurons, plus we resent the hype that they have been getting
  • Is there postnatal neurogenesis? Is it real or behaviorally relevant?
    • See recent paper saying it doesn’t exist in humans
  • Brain imaging
    • Does what we see correlate with neural activity? Are we able to correct for multiple comparisons correctly? Does anyone actually correct for multiple comparisons correctly?
  • Bayesian brain
    • Do people actually use their priors? Does the brain represent distributions? etc
  • Konrad Kording
    • Can neuroscientists understand a microprocessor? Is reverse correlation irrelevant?
  • Do mice have a PFC
    • It’s so small!
  • STDP: does it actually exist?
    • Not clear that neurons in the brain actually use STDP – often looks like they don’t. Same with LTP/LTD!
  • How useful are connectomes
    • About as useful as a tangled ball of yarn?
  • LTP is the storage mechanism for memories
    • Maybe it’s all stored in the extracellular space, or the neural dynamics, or something else.
  • Are purely descriptive studies okay or should we always search for mechanism
    • Who cares about things that you can see?!

Updates

  • Does dopamine have ‘a role’?
    • Should we try to claim some unified goal for dopamine, or is it just a molecule with many different downstream effects depending on the particular situation?
  • Do oscillations (‘alpha waves’, ‘gamma waves’, etc) do anything?
    • Are they just epiphenomena that are correlated with stuff? Or do they actually have a causative role?

Behold, the behaving brain!

In my opinion, THE most important shift in neuroscience over the past few years has been the focus on how behavior changes neural function across the whole brain. Even the sensory systems – supposedly passive passers-on of perfectly produced pictures of the world – are shifted in unique ways by behavior. An animal that is walking will respond differently to visual stimuli than an animal that is just sitting around. Almost certainly, other behaviors have their own effects on neural activity.

A pair of papers this week have made that point rather elegantly. First, Carsen Stringer and Marius Pachitariu from the Carandini/Harris labs have gobs of data from when they were recording ~10,000 neurons simultaneously. Marius Pachitariu has an excellent twitter thread explaining the work. I just want to take one particular point from this paper which is that you can explain a surprising amount of variance in the primary visual cortex – and all across the brain – simply by looking at the movement of the animal’s face.

In the figures below, they have taken movies of an animal’s face, extracted the motion energy (roughly, how much movement there is at that location in the video), and then used PCA to find the common ways that you can describe that movement. Using this kind of common motion, they then tried to predict the activity of individual neurons – while ignoring the traditional sensory or task information that you would normally be looking at.
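As a rough schematic of that analysis (my own numpy reconstruction, not the authors’ code; the array names, sizes, and ridge penalty are all invented):

```python
import numpy as np

# face: video of the animal's face, (n_frames, height, width) -- hypothetical data
face = np.random.rand(1000, 60, 80)

# Motion energy: absolute frame-to-frame difference at each pixel.
motion = np.abs(np.diff(face, axis=0)).reshape(999, -1)

# PCA via SVD to find the common ways the face moves.
motion -= motion.mean(axis=0)
U, S, Vt = np.linalg.svd(motion, full_matrices=False)
face_pcs = U[:, :50] * S[:50]            # top 50 motion components

# Predict each neuron's activity from the face PCs alone (ridge regression),
# ignoring the visual stimulus entirely.
spikes = np.random.rand(999, 200)        # (time, neurons) -- hypothetical recordings
lam = 1.0
W = np.linalg.solve(face_pcs.T @ face_pcs + lam * np.eye(50), face_pcs.T @ spikes)
pred = face_pcs @ W
r2 = 1 - ((spikes - pred) ** 2).sum(0) / ((spikes - spikes.mean(0)) ** 2).sum(0)
print(np.median(r2))  # ~0 for this random data; surprisingly high in the real data
```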

The other paper is from Simon Musall and Matt Kaufman in Anne Churchland’s lab. Simon also has a nice twitter description of their work. Here, they used a technique that can image the whole brain simultaneously (though I am not sure to what depth), at the cost of resolution (individual neurons are not identifiable but are averaged together). The animals are doing a task where they need to tell the difference between two tones, or two flashes of light. You can look for the brain areas involved in choice, or the areas involved in responding to vision or audio, and they are there (choice, kind of?). But if you look at where movement is being represented, it is everywhere.

The things that you would normally look for – the amount of brain activity you can explain by an animal’s decisions or its sensory responses – explain very little unique information.

This latter point is really important. If you had looked at the data and ignored the movement, you would certainly have found neurons that were correlated with decision-making. But once you take into account movement, that correlation drops away – the decisions are too correlated with general movement variables. People need to start thinking about how much of their neural data is responding to the task the animal is doing and how much is due to movement variables that are aligned to the task. This is really important! Simple averaging will not wash away this movement.
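The standard way to quantify “unique” information is with nested regression models: fit the full model, then drop one set of regressors and see how much explained variance you lose. A minimal sketch with hypothetical design matrices (and no cross-validation, which a real analysis would need):

```python
import numpy as np

def r2(X, y, lam=1.0):
    """Ridge fit and in-sample R^2 (cross-validation omitted for brevity)."""
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    resid = y - X @ W
    return 1 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()

# Hypothetical regressors over time.
T = 2000
task     = np.random.rand(T, 10)   # choice, stimulus, reward events
movement = np.random.rand(T, 30)   # face/body motion variables
y        = np.random.rand(T)       # one neuron (or pixel) to explain

full        = r2(np.hstack([task, movement]), y)
task_only   = r2(task, y)
move_only   = r2(movement, y)

unique_task     = full - move_only   # what only the task explains
unique_movement = full - task_only   # what only movement explains
# If unique_task is tiny, 'decision' signals may just be task-aligned movement.
```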

There is a lot more to both of these papers and both will be more than worth your time to dig into.

I’m not sure if you would have noticed this effect in either case if they weren’t recording from massive numbers of neurons simultaneously. This is a brave new world of neuroscience. How do we deal with this massively complex behavioral data at the same time that we deal with massive neural populations?

In my mind, the gold standard for how to analyze this data comes from Eva Naumann and James Fitzgerald in a paper out of the Engert lab. They are analyzing data from the whole brain of the zebrafish as it fictively swims around and responds to some moving background. Rather than throwing up their hands at the complexity of this data and the number of moving pieces, what they did was very precisely quantify one particular aspect of the behavior. Then they followed the circuit step by step and tried to understand how the quantified behavior was transformed in the circuit. How did the visual stimuli guide the fish’s orientation in the water? What were the different ways the retina represented that visual information? How was this transformed by the relays into the brain? How was this information further transformed in the next step? How did the motor centers generate the different types of behavior that were quantified?

The brain evolved to produce behavior. In my opinion there is no way to understand the brain – any of it – if you don’t understand the behavior that the animal is producing.

Monday Open Question: How many types of neurons are there in the brain?

How many types of neurons are in the brain? Not just number, but classes that represent some fundamental unit of computation? I tweeted an article about this a couple days ago and (justly) got pilloried for saying it counted classes in the brain rather than in two cortical regions. So what is the answer for the whole brain?

Obviously the answer depends on the brain that you are talking about. In the nematode C. elegans, we know that every hermaphrodite has 302 neurons and every male has 381. I believe these specifically male neurons get pruned in the developmental process if the animal does not become a male. These neurons tend to come in symmetric pairs or quartets, one showing up on each side of the body, so the number of neural ‘classes’ is on the order of 118 – though there is evidence that some neurons can be slightly different between their left and right side (ASEL and ASER, for example). Fruit flies (Drosophila) also show sex-specific neurons, with the genes Fruitless and Doublesex controlling whether certain neurons are masculinized or feminized. So not only are there going to be different classes of neurons in males and females, we know that there are single (or, again, symmetric) neurons that control single behaviors. On the other hand, in the fruit fly retina there are definitely distinct classes of neurons that are tiled across the eye. This should frame our thinking about the number of neural classes – there are classes with large numbers of neurons where convolution is useful (repeating the same computation across some space, such as visual or auditory or even musculature space) but perhaps neural function becomes more specific and class-less once specific outputs are needed.

The fruit fly brain may seem a bit silly – why bother comparing it to us cortical mammals? But adult Drosophila have roughly the same number of neurons as larval zebrafish, a vertebrate with a cerebrum that is a popular organism in neuroscience. So do we think that the zebrafish has just as many pre-planned neurons as Drosophila? Or is its neural structure somewhat looser, less pre-specified? I don’t have an answer here, but I think it is worth thinking about the similarities and differences in these organisms, which have similar numbers of neurons but quite different environmental and developmental needs.

Let’s turn to mammals. The area with the most well-defined number of cell classes is probably the retina. I’m not sure of the up-to-date estimates for the number of cell classes, but the classic description has two classes (rods/cones) in the input layer of the retina, which can be further split depending on the number of colors an animal can see – for instance, humans have S, M, and L cones roughly corresponding to blue, green, and red light. This review roughly estimates that further into the eye there are two types of horizontal cells (first layer), ~12 types of bipolar cells (second layer), and ~30 types of amacrine cells (third layer). From other sources we think there is something on the order of ~30 types of retinal ganglion cells, the output from the eye into the brain. Interestingly, this is roughly the same number of defined classes that we think the fruit fly has! But again, there may be species specificity here; something on the order of 95%+ of the output layer of the monkey retina is a single cell class. So the eye alone has at least 80 classes of neurons, and quite probably more.
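Tallying the rough counts above (treating rods plus the three human cone types as four photoreceptor classes – my bookkeeping, not the review’s):

```python
# Rough tally of retinal cell classes from the estimates quoted above.
retina_classes = {
    "photoreceptors (rods + S/M/L cones)": 4,
    "horizontal cells": 2,
    "bipolar cells": 12,
    "amacrine cells": 30,
    "retinal ganglion cells": 30,
}
print(sum(retina_classes.values()))  # 78 -- "at least 80", give or take
```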

The olfactory bulb is probably more complex. In mice, at least, the number of olfactory glomeruli is probably on the order of one or two thousand? Though I would expect that once past this layer the classification will look more like retina or cortex – on the order of tens of subtypes.


Now let’s think about the cortex. The paper that inspired this post tried to estimate the number of cell classes by using single-cell RNA-sequencing in mice to identify the transcripts present in different cells and then clustering them into distinct sub-classes. It should be clear up front that the number of clusters you identify (1) may not be categorical but could be continuous between types of neurons and (2) may be different if you clustered with a different method or with different types of data – functional responses, for instance. The authors in this paper make clear that they certainly find cells that look ‘intermediate’ between their clusters, so whatever categories we get may not be very firm. For instance, in the following figure the size of each circle represents the number of cells they identify in a particular cluster and the thickness of the line between two circles is how many cells are intermediate between those two clusters.

[Figure: cluster sizes (circles) and numbers of intermediate cells (connecting lines)]

Without getting into too many details, they find that the two distinct anatomical regions share roughly 50 inhibitory neuron types with common transcript profiles, suggesting that the types of inhibition may be a common, repeated pattern across the brain. However, the ~50 excitatory neuron types were essentially unique to each of the two regions.

Chuck Stevens has an interesting paper where he attempts to find lower and upper bounds on the number of possible cell classes in cortex. Let’s say that we accept the tiling principle, that the same types of cells are repeated again and again in a motif:

This argument can be extended to the neocortex. Underneath 1 mm² of most regions of the primate cortical surface are about 10⁵ neurons — the striate cortex is an exception with twice the number — each of which covers say 0.05 mm² with its dendritic arbor (assumed to be 0.25 mm in diameter). Twenty neurons with dendritic arbors of this size would be required to cover a square millimetre of cortex, so the upper limit on number of cell types, if each must tile the cortex, is 10⁵/20 = 5000, or an average of 1000 per layer. Now assume that the cortex has 10 times more neurons of each type than required to cover the cortex, a redundancy factor of 10 as guessed above for hippocampus: we thus would have about 100 neuron types per layer. If we believe there are a dozen ganglion cell types, two dozen amacrine cell types, and four dozen different kinds of inhibitory neurons in the CA1 region of hippocampus, 100 cell types per layer of neocortex seems like a reasonable number – not good news for the micromodelers.

Let’s update this estimate; we think that there may be 25 excitatory cell types per region. I don’t actually know off-hand the percentage of mouse cortex that these two regions encompass (a motor region, ALM, and a visual region, VISp) but let’s say they are roughly 10% of the cortical area each (this could be grossly wrong so feel free to correct me). We then might believe that cortex has on the order of 25 × 10 ≈ 250 excitatory cell classes and ~50 inhibitory cell classes. Does this feel right? 300 classes for all of cortex?
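For what it’s worth, here is that arithmetic laid out in a few lines of Python (the ~10 region-sized chunks are my guess from above, not a measured number):

```python
# Stevens-style back-of-envelope, using the numbers quoted above.
neurons_under_mm2 = 1e5   # neurons under 1 mm^2 of cortical surface
arbors_to_tile    = 20    # neurons of one type needed to tile 1 mm^2
layers            = 5     # 5000 total / 1000 per layer implies ~5 layers
redundancy        = 10    # assumed excess neurons per type

upper_bound = neurons_under_mm2 / arbors_to_tile   # 5000 types total
per_layer   = upper_bound / layers / redundancy    # ~100 types per layer

# My extrapolation from the sequencing paper (region fraction is a guess):
excitatory = 25 * 10   # ~25 unique excitatory types x ~10 region-sized chunks
inhibitory = 50        # roughly shared across regions
print(upper_bound, per_layer, excitatory + inhibitory)  # 5000.0 100.0 300
```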

But the cortex contains a minority of the cells in the brain – the majority are in a single structure, the cerebellum. I don’t know of an estimate of the number of neural classes there, but a structure that is known for its beautifully tiling neurons seems likely to have a fair bit of structure in its cell classes. What would we estimate here? Something similar to a primary sensory area, with ~50-100 cell classes? Something more, something less?

And what about other subcortical regions in the brain like hypothalamus that are more directly responsible for specific behavior? Should we expect many thousands of distinct subtypes for each of the behaviors or something more patterned?

Tell me where I’m wrong.

Why does the eye care about the nose?

The ear, the nose, the eye: all of the neurons closest to the environment are doing one thing: attempting to represent the outside world as perfectly as possible. Total perfection is not possible – you can only make the eye so large, and you only need to see so much detail in order to live your life. But if you were to try to predict what the neurons in the retina or the ear are doing based on what could provide as much information as possible, you’d do a really good job. Once that information is in the nervous system, the neurons that receive it can do whatever they want with it, processing it further or turning it directly into a command to blink or jump or just stare into space.

Even though this is the story that all of us neuroscientists get told, it’s not the full thing. A while back, I posted that the retina receives input from other places in the brain. That just seems weird from this perspective. If the retina is focused on extracting useful information about the visual world, why would it care about how the world smells?

One simple explanation might be that the neurons only want to code for surprising information. Maybe the nose can help out with that? After all, if something is predictable then it is useless; you already know about it! No need to waste precious bits. This seems to be the purpose of certain feedback signals to the fly eye. A few recent papers have shown that neurons in the eye that respond to horizontal or vertical motion receive signals about how the animal is moving, so that when the animal moves to the left the eye expects leftward motion in the horizontal cells – and so only responds to leftward motion above and beyond what the animal is causing. But again – what could this have to do with smells?

Let’s think for a second about some times when the olfactory system uses non-olfactory information. The olfactory system should be trying to represent the smell-world as well as it can, just like the visual system is trying to represent the image-world. But the olfactory system is directly modulated depending on the needs of an animal at any given moment. For instance, a hungry fly will release a peptide that modifies how much a set of neurons that respond to particular odors can signal the rest of the brain. In other words, how hungry an animal is determines how well it can smell something!

These two stories – how the eye interacts with the motion of the body, how the nose interacts with hunger – might give us a hint about what is happening. The sensory systems aren’t just trying to represent as much information about the world as possible, they are trying to represent as much information about useful stuff as possible. The classical view of sensory systems is a fundamentally static one, that they have evolved to take advantage of the consistencies in the world to provide relevant information as efficiently as possible*. But the world is a dynamic place, and the needs of an animal at one time are different from the needs of the animal at another.

Take an example from tadpoles. When the tadpole is in a very dim environment, it has a harder time separating dark objects from the background. The world just has less contrast (try turning down the brightness on your screen and reading this – you’ll get the idea). One way that these tadpoles control their ability to increase or decrease contrast is through a neuromodulator that changes the resting potential of a cell (how responsive it is to stimuli), but only over relatively long timescales. This is not fast adaptation but slow adaptation to the changing world. The end result of this is that tadpoles are better able to see moving objects – but presumably at the expense of being worse at seeing something else. That seems like a pretty direct way of going from a need for the animal to code certain visual information more efficiently to the act of doing it. The point is not that this is driven by a direct behavioral need of the animal – I have no idea if this is due to a desire to hunt or avoid objects or what-have-you. Instead, it’s an example of how an animal could control certain information if it wanted to.

This kind of behavioral gating does occur through feedback to the retina. Male zebrafish use a combination of smell and sight when they decide how they want to interact with other zebrafish. Certain olfactory neurons that respond to a chemical involved in mating signal to neurons in the retina – making certain cells more or less responsive, in the same way that tadpoles control the contrast of their world (above). It appears as if the olfactory information sends a signal to the eye that either gates or enhances the visual information – the stripe detection or what-have-you – that the little fishies use when they want to court another animal.

The sensory system is not perfect. It must make trade-offs about which information is important to keep and which can be thrown away, about how much of its limited bandwidth to spend on one signal or another. A lot of the structure comes naturally from evolution, representing a long-term learning of the structure of the world. But animals have needs that fluctuate over other timescales – and may require more computation than can be provided directly in the sensory area. How else would the eye know that it is time to mate?

What this doesn’t answer is why the modulation is happening here; why not downstream?


* This is a major simplification, obviously, and a lot of work has been done on adaptation, etc in the retina.


The skeletal system is part of the brain, too

A fact that seems uniformly forgotten is that the brain is a biological organ just the same as your liver or your spleen or your bones. Its goal – like every other organ – is to keep your stupid collection of cells in one piece. It is one, coherent organism. Just like any other collection of individuals, it needs to communicate in order to work together.

Many different organs are sending signals to the brain. One is your gut, which is innervated by the enteric nervous system. This “other” nervous system contains more neurons (~500 million) than the spinal cord, and about ten times as many neurons as a mouse has in its whole brain. Imagine that: living inside of you is an autonomous nervous system with sensory inputs and motor outputs.

We like to forget this. We like to point to animals like the octopus and ask, what could life be like as an animal whose nervous system is distributed across its body? Well, look in the mirror. What is it like? We have multiple autonomous nervous systems; we have computational processing spread across our body. Have you ever wondered what the ‘mind’ of your gastrointestinal system must think of the mind in the other parts of your body?

The body’s computations about what to do about the world aren’t limited to your nervous system: they are everywhere. This totality is so complete that even your very bones are participating, submitting votes about how you should be interacting with the world. Bones (apparently) secrete neurohormones that directly interact with the brain. These hormones then travel through the blood to make a small set of neurons more excitable, more ready to respond to the world. These neurons then become ready and willing to tell the rest of the brain to eat less food.

This bone-based signaling is a new finding and totally and completely surprising. I don’t recall anyone postulating a bone-brain axis before. Yet it turns out that substantial computations are performed all throughout the body that affect how we think. Animals that are hungry make decisions in a fundamentally different way, willing to become riskier and riskier.


A lot of this extra-brain processing is happening on much slower timescales than the fast neuronal processing in the brain: it integrates information over much longer stretches of time. This mix of fast-and-slow processing is ubiquitous for animals: classifying the sensory world is fast; integrating the state of the body is slow. The body is both fast and slow.

People seem to forget that we are not one silicon instantiation of neural architecture away from replicating humans: we are meat machines.

References


MC4R-dependent suppression of appetite by bone-derived lipocalin 2. Nature 2017.

Every spike matters, down to the (sub)millisecond

There was a time when the neuroscience world was consumed by the question of how individual neurons were coding information about the world. Was it in the average firing rate? Or did every precise spike matter, down to the millisecond? Was it, potentially, more complicated?

Like everything else in neuroscience, the answer was resolved in an “it depends, it’s complicated” kind of way. The most important argument against a role for precise spike timing is noise. There is potential for noise in the sensory input, noise at every synapse, noise in every neuron. Why not make the system robust to this noise by taking some time average? On the other hand, if you want to respond quickly you can’t take too much time to average – you need to respond!

Much of the neural coding literature comes from sensory processing, where it is easy to control the input. Once you get deeper into the brain, it becomes less clear how much of what a neuron receives is sensory and how much is some shifting mass of internal activity.

The field has shifted a bit with the rise of calcium indicators, which allow imaging the activity of large populations of neurons at the expense of timing information. Not only do they sacrifice precise timing information, but it can also be hard to get connectivity information. Plus, once you start thinking about networks, the nonlinear mess makes it hard to think about timing in general.

The straightforward way to decide whether a neuron is using the specific timing of each spike to mean something is to ask whether that timing contains any information. If you jitter the precise position of any given spike by 5 milliseconds, 1 millisecond, half a millisecond – does the neural code become more redundant? Does this make the response of the neuron any more random at that moment in time than it was before?

Just show an animal some movie and record from a neuron that responds to vision. Now show that movie again and again and get a sense of how that neuron responds to each frame or each new visual scene. The information is then just how stereotyped the response is at any given moment, how much more certain you are at that moment than at any other. Now pick up a spike and move it over by a millisecond or so. Is this within the stereotyped range? Then it probably isn’t conveying information at the millisecond scale. Does the response become more random? Then you’ve lost information.
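As a schematic of that logic, here is a toy jitter analysis (entirely made-up data; a real analysis would use proper information-theoretic estimators rather than this crude PSTH statistic):

```python
import numpy as np

rng = np.random.default_rng(1)

def psth_information(trials, bin_ms=1.0, duration_ms=1000):
    """Crude proxy for timing information: variance of the trial-averaged
    PSTH relative to its mean rate. A flat PSTH carries no timing info."""
    bins = int(duration_ms / bin_ms)
    counts = np.zeros(bins)
    for spikes in trials:
        counts += np.histogram(spikes, bins=bins, range=(0, duration_ms))[0]
    psth = counts / len(trials)
    return psth.var() / (psth.mean() + 1e-12)

def jitter(trials, sigma_ms):
    """Displace every spike by Gaussian noise of width sigma_ms."""
    return [spikes + rng.normal(0, sigma_ms, size=len(spikes))
            for spikes in trials]

# Hypothetical repeated-stimulus data: the same 20 spike times on every
# trial, plus 0.2 ms of intrinsic jitter.
template = np.sort(rng.uniform(0, 1000, 20))
trials = [template + rng.normal(0, 0.2, 20) for _ in range(100)]

for sigma in [0.5, 1.0, 5.0]:
    print(sigma, psth_information(jitter(trials, sigma)))
# If the statistic falls as sigma grows, timing at that scale carried information.
```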

But these cold statistical arguments can be unpersuasive to a lot of people. It is nice if you can see a picture and just understand. So here is the experiment: songbirds have neurons which directly control the muscles for breathing (respiration). This provides us with a very simple input/output system, where the input is the time of a spike and the output is the air pressure exerted by the muscle. What happens when we provide just a few spikes and move the precise timing of one of these spikes?

The beautiful figure above is one of those that is going directly into my bag’o’examples. It shows a sequence of three induced spikes (upper right) where the time of the middle spike changes. The main curves show how the air pressure changes with the different spike timings. You can’t get much clearer than that.

Not only does it show, quite clearly, that the precise time of a single spike matters but that it matters in a continuous fashion – almost certainly on a sub-millisecond level.

Update:

The twitter thread on this post ended up being useful, so let me clarify a few things. First, the interesting thing about this paper is not that the motor neurons can precisely control the muscle; it is that when they record the natural incoming activity, it appears to provide information on the order of ~1ms; and the over-represented patterns of spikes include the patterns in the figure above. So the point is that these motor neurons are receiving information on the scale of one millisecond and that the information in these patterns has behaviorally-relevant effects.

Some other interesting bits of discussion came up. What doesn’t use spike-timing information? Plenty of sensory systems do use it; I thought at first that maybe olfaction doesn’t, but of course I was wrong. Here’s a hypothesis: all sensory and motor systems do (e.g., everything facing the outside world). (Although, read these papers.) When would you expect spike timing not to matter? When the number of active input neurons is large and uncorrelated. Does spike timing make sense for Deep Networks, where the neurons are implicitly representing firing rates? Here is a paper that breaks it down into rate and phase.

References

Srivastava KH, Holmes CM, Vellema M, Pack AR, Elemans CP, Nemenman I, & Sober SJ (2017). Motor control by precisely timed spike patterns. Proceedings of the National Academy of Sciences of the United States of America, 114 (5), 1171-1176 PMID: 28100491

Nemenman I, Lewen GD, Bialek W, & de Ruyter van Steveninck RR (2008). Neural coding of natural stimuli: information at sub-millisecond resolution. PLoS computational biology, 4 (3) PMID: 18369423

Studying the brain at the mesoscale

It is not entirely clear that we are going about studying the brain in the right way. Zachary Mainen, Michael Häusser and Alexandre Pouget have an alternative to our current focus on (relatively) small groups of researchers pursuing their own idiosyncratic questions:

We propose an alternative strategy: grass-roots collaborations involving researchers who may be distributed around the globe, but who are already working on the same problems. Such self-motivated groups could start small and expand gradually over time. But they would essentially be built from the ground up, with those involved encouraged to follow their own shared interests rather than responding to the strictures of funding sources or external directives…

Some sceptics point to the teething problems of existing brain initiatives as evidence that neuroscience lacks well-defined objectives, unlike high-energy physics, mathematics, astronomy or genetics. In our view, brain science, especially systems neuroscience (which tries to link the activity of sets of neurons to behaviour) does not want for bold, concrete goals. Yet large-scale initiatives have tended to set objectives that are too vague and not realistic, even on a ten-year timescale.

Here are the concrete steps they suggest in order to form a successful ‘mesoscale’ project:

  1. Focus on a single brain function.
  2. Combine experimentalists and theorists.
  3. Standardize tools and methods.
  4. Share data.
  5. Assign credit in new ways.

Obviously, I am comfortable living on the internet a little more than the average person. But with the tools that are starting to proliferate for collaborations – Slack, github, and Skype being the most frequently used right now – there is really very little reason for collaborations not to extend beyond neighboring labs.

The real difficulties are two-fold. First, you must actually meet your collaborators at some point! Generating new ideas for a collaboration rarely happens without the kind of spontaneous discussions that arise when physically meeting people. When communities are physically spread out or do not meet in a single location, this can happen less than you would want. If nothing else, this proposal seems like a call for attending more conferences!

Second is the ad-hoc way data is collected. Calls for standardized datasets have been around about as long as there has been science to collaborate on and it does not seem like the problem is being solved any time soon. And even when datasets have been standardized, the questions that they had been used for may be too specific to be of much utility to even closely-related researchers. This is why I left the realm of pure theory and became an experimentalist as well. Theorists are rarely able to convince experimentalists to take the time out of their experiments to test some wild new theory.

But these mesoscale projects really are the future. They are a way for scientists to be more than the sum of their parts, and to be part of an exciting community that is larger than one or two labs! Perhaps a solid step in this direction would be to utilize the tools that are available to initiate conversations within the community. Twitter does this a little, but where are the foraging Slack chats? Or amygdala, PFC, or evidence-accumulation communities?

Your eyes are your own

[Image: retinal mosaic]

This blows my mind. There is a technique that allows us to map the distribution of cones in a person’s eyes. You would think that there is some consistency from individual to individual, or that it would be distributed in some predictable manner but! No!

What happens when you show each of these people a flash of color and ask them to just give it a name? Something like you’d expect – the person who only seems to have red cones names just about everything either red or white. Those with a broader distribution of cones give a broader distribution of colors. You can even predict the colors that someone will name based solely on the tiling of these cones.

[Figure: color naming]

And none of these people are technically color blind! What kind of a world is BS seeing??

Reference

Brainard, D., Williams, D., & Hofer, H. (2008). Trichromatic reconstruction from the interleaved cone mosaic: Bayesian model and the color appearance of small spots. Journal of Vision, 8(5). DOI: 10.1167/8.5.15

CSHL Vision Course

[Photo: first day at Cold Spring Harbor]

I have just returned from two weeks in Cold Spring Harbor at the Computational Neuroscience: Vision course. I was not entirely sure what to expect. Maybe two weeks of your standard lectures? No – this was two weeks of intense scientific discussion punctuated with the occasional nerf fight (sometimes during lectures, sometimes not), beach bonfire, or table tennis match.

It is not just the material that was great but the people. Every day brought in a fresh rotation of scientists ready to spend a couple of days discussing their work – and the work of the field – and to just hang out. I learned as much or more at the dinner table as I did in the seminar room. And it wasn’t just the senior scientists who were exhilarating, but the other students, too. It is a bit intimidating seeing how much talent exists in the field… and how great they are as people.

I also found that the graduate students at CSHL have the benefit of attending these courses (for free). It was great to meet people from all of the labs and hear about the cool stuff going on. Of course, they live pretty well, too. Here is the back patio of my friend’s house:

[Photo: the back patio of my friend’s house]

I think I could get used to that?

Anyway, this is all a long-winded way of saying: if you get the chance, attend one of these courses! And being there motivated me to start making more of an effort to update the blog again. I swear, I swear…

[Photo: last day at Cold Spring Harbor]