Learn by consuming the brains of your enemies

A few people have sent this my way and asked about it:

In a paper published Monday in the journal eNeuro, scientists at the University of California-Los Angeles reported that when they transferred molecules from the brain cells of trained snails to untrained snails, the animals behaved as if they remembered the trained snails’ experiences…

In experiments by Dr. Glanzman and colleagues, when these snails get a little electric shock, they briefly retract their frilly siphons, which they use for expelling waste. A snail that has been shocked before, however, retracts its siphon for much longer than a new snail recruit.

To understand what was happening in their snails, the researchers first extracted all the RNA from the brain cells of trained snails, and injected it into new snails. To their surprise, the new snails kept their siphons wrapped up much longer after a shock, almost as if they’d been trained.

Next, the researchers took the brain cells of trained snails and untrained snails and grew them in the lab. They bathed the untrained neurons in RNA from trained cells, then gave them a shock, and saw that they fired in the same way that trained neurons do. The memory of the trained cells appeared to have been transferred to the untrained ones.

The full paper is here.

The long and short of this is that there is a particular reflex (the memory) that changes once the snail has experienced a lot of shocks. How memory is encoded is a bit debated, but one strongly-supported mechanism (especially in these snails) is that the amounts of particular proteins expressed in some neurons change. The neuron might, for instance, make more of one channel or receptor, which makes it more or less likely to respond to signals from other neurons. So when a snail receives its first shock, a neuron responds and the animal withdraws its siphon. Over time, each shock builds up more of the proteins that make the neuron respond more and more strongly. How much of these proteins gets built depends on the amount of RNA (the “blueprint” for the proteins, if you will) located in the part of the neuron that receives this information. There are a lot of sophisticated mechanisms that determine how and where these RNAs are made and then shipped off to the place in the neuron where they can be of the most use.

This new paper shows that in these snails, you can just dump RNA from another animal onto these neurons, and that RNA already encodes something about the type of protein it will produce. This is not going to work in most situations (I think?), so it is surprising and cool that it does here! But hopefully you can begin to see what is happening and how the memory is transferring: the RNA is now in the cell, marked in a way that will lead it to produce some protein that will change how the cell responds to input, and so on.

One of the people who asked me about this asked specifically in relation to AI: could this be used as a new method of training deep networks somehow? The closest analogy I can think of is if you have two networks with the same architecture that have been trained in the same way (this is the evolution part). Then you train one a little more, maybe on new stimuli or a new task, or maybe you are doing reinforcement learning and you now have a network that predicts a different action-value pair. The analogy would be to choose a few units (neurons) and directly copy their weights from the first network into the second network. Would this work? Would it be useful? I doubt it, but maybe? But see this interesting paper on knowledge distillation that was pointed out to me by John O’Malia.
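
If you want to make the analogy concrete, here is a minimal sketch (mine, not from any of the papers) of what copying a few units between two networks with the same architecture could look like in PyTorch; the layer sizes and the particular units chosen are arbitrary placeholders:

```python
# Hypothetical sketch: graft the incoming and outgoing weights of a few hidden
# "neurons" from a further-trained network into an untrained twin.
import torch
import torch.nn as nn

def make_net():
    return nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 10))

donor = make_net()      # imagine this network got the extra training
recipient = make_net()  # same architecture, no extra training

units_to_copy = [3, 17, 42]   # arbitrarily chosen hidden units
with torch.no_grad():
    for i in units_to_copy:
        recipient[0].weight[i] = donor[0].weight[i]        # incoming weights
        recipient[0].bias[i] = donor[0].bias[i]
        recipient[2].weight[:, i] = donor[2].weight[:, i]  # outgoing weights
```

Whether the grafted units would mean anything at all to the recipient network is exactly the open question.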


Controversial topics in Neuroscience

Twitter thread here.

  • Do mice use vision much?
    • They have pretty crappy eyesight and their primary mode of exploration seems to be olfactory/whisker-based
  • How much is mouse cortex like primate cortex?
    • Mouse cortex is claimed to be more multimodal than primate cortex which is more specialized
  • “The brain does deep learning”
    • Deep learning units aren’t exactly like neurons, plus we resent the hype that they have been getting
  • Is there postnatal neurogenesis? Is it real or behaviorally relevant?
    • See recent paper saying it doesn’t exist in humans
  • Brain imaging
    • Does what we see correlate with neural activity? Are we able to correct for multiple comparisons correctly? Does anyone actually correct for multiple comparisons correctly?
  • Bayesian brain
    • Do people actually use their priors? Does the brain represent distributions? etc
  • Konrad Kording
    • Can neuroscientists understand a microprocessor? Is reverse correlation irrelevant?
  • Do mice have a PFC
    • It’s so small!
  • STDP: does it actually exist?
    • Not clear that neurons in the brain actually use STDP – often looks like they don’t. Same with LTP/LTD!
  • How useful are connectomes
    • About as useful as a tangled ball of yarn?
  • LTP is the storage mechanism for memories
    • Maybe it’s all stored in the extracellular space, or the neural dynamics, or something else.
  • Are purely descriptive studies okay or should we always search for mechanism
    • Who cares about things that you can see?!

Updates

  • Does dopamine have ‘a role’?
    • Should we try to claim some unified goal for dopamine, or is it just a molecule with many different downstream effects depending on the particular situation?
  • Do oscillations (‘alpha waves’, ‘gamma waves’, etc) do anything?
    • Are they just epiphenomena that are correlated with stuff? Or do they actually have a causative role?

What HASN’T Deep Learning replicated from the brain?

The brain represents the world in particular ways. Here are a few:

1. The visual world on the retina

The retina is thought to whiten images, or transform them so that they always have roughly the same average, maximum and minimum (so that you can see in very bright and very dark environments). This was originally shown very nicely in two papers from Atick and Redlich (1990, 1992). Essentially, you want to smooth the visual scene around each point depending on the noise. You get receptive fields that look something like this:

Or more generally this:

A denoising autoencoder – a network that tries to reconstruct the original image from a corrupted copy, which ends up smoothing locally – has neural representations that look similar:
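
As a concrete (and entirely hypothetical) illustration of the idea, here is a minimal single-hidden-layer denoising autoencoder in PyTorch; it assumes you already have a tensor `patches` of flattened 16x16 grayscale image patches, which I don't show how to build:

```python
# Sketch: train to reconstruct clean patches from noisy copies, then look at
# the rows of the encoder weight matrix as "receptive fields".
import torch
import torch.nn as nn

encoder = nn.Linear(256, 64)   # 16*16 pixels -> 64 hidden units
decoder = nn.Linear(64, 256)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for step in range(5000):
    idx = torch.randint(0, patches.shape[0], (128,))
    clean = patches[idx]
    noisy = clean + 0.3 * torch.randn_like(clean)   # corrupt the input
    recon = decoder(torch.relu(encoder(noisy)))     # try to recover the clean patch
    loss = ((recon - clean) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

filters = encoder.weight.detach().reshape(64, 16, 16)  # visualize each as an image
```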

2. The visual world in first order visual cortex

Similarly, if you want to efficiently represent the visual world (once it is denoised) you want to represent things sparsely or independently. This was shown by Olshausen and Field 1996  and Bell and Sejnowski 1997 and is equivalent to doing ICA on natural images. Note that doing PCA on natural images will give you Fourier components.

If you train a Deep Network on ImageNet (AlexNet example below), the filters on the first layer look similar:
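
This is easy to try yourself. A rough sketch with scikit-learn, assuming `images` is a list of 2D grayscale numpy arrays (a handful of natural scenes):

```python
# Sketch: PCA on natural-image patches gives Fourier-like components;
# ICA gives localized, oriented, Gabor-like filters.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
patches = []
for img in images:
    for _ in range(200):                      # sample random 12x12 patches
        r = rng.integers(0, img.shape[0] - 12)
        c = rng.integers(0, img.shape[1] - 12)
        patches.append(img[r:r + 12, c:c + 12].ravel())
X = np.asarray(patches, dtype=float)
X -= X.mean(axis=0)

pca_filters = PCA(n_components=64).fit(X).components_                      # Fourier-ish
ica_filters = FastICA(n_components=64, max_iter=1000).fit(X).components_   # Gabor-ish
# reshape any row to 12x12 to look at it
```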

3. The auditory world

The best representation of the auditory world is also efficiently encoded. Lewicki 2002 shows that if you run ICA on acoustic data you get filters that look nearly identical to the sounds neurons respond to (wavelet basis functions).

I have not seen a visualization of the first few layers of a neural network that classifies speech (for instance) but I would guarantee it has features that look like wavelets.

4. Spatial cells

Your sense of place in the world is encoded by a series of grid cells – which are a periodic representation of place – and place cells, which represent precise locations in space. Dordek et al 2016 showed that non-negative PCA on place cells will give you grid cells. This is similar to the result that PCA on images gives you Fourier components. Note that Dordek et al also use a single-layer feedforward neural network and show that it has a similar property.

It turns out that if you train a Deep recurrent network on a navigation task, you get grid cells (once you have assumed place cells).
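
Here is a toy sketch of the Dordek et al. recipe (my own simplification, not their code): simulate Gaussian place-cell maps over a square arena and pull out the leading non-negative component by projected power iteration. In the paper this kind of non-negativity-constrained PCA yields hexagonal, grid-like spatial maps; this toy version only shows the shape of the computation, not a guarantee of hexagons.

```python
# Sketch: place-cell activity maps -> leading non-negative "principal component".
import numpy as np

rng = np.random.default_rng(0)
grid, n_place, sigma = 40, 500, 3.0
xs, ys = np.meshgrid(np.arange(grid), np.arange(grid))
positions = np.stack([xs.ravel(), ys.ravel()], axis=1)         # (1600, 2) locations
centers = rng.uniform(0, grid, size=(n_place, 2))              # place-field centers
d2 = ((positions[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
activity = np.exp(-d2 / (2 * sigma ** 2))                      # locations x place cells

C = np.cov(activity, rowvar=False)          # covariance across place cells
w = np.abs(rng.standard_normal(n_place))
for _ in range(200):                        # projected power iteration
    w = C @ w
    w = np.clip(w, 0, None)                 # enforce non-negative weights
    w /= np.linalg.norm(w)

spatial_map = (activity @ w).reshape(grid, grid)   # visualize this as an image
```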

Other

What else is left? Olfaction is a mess and doesn’t have a coherent coding principle as far as I can tell (the olfactory space is not clearly defined). Mechanosensation (touch) has been hard to define, but Zhao et al 2017 find first-order touch receptive fields with an autoencoder (as with vision). You can get CPGs (oscillatory movement generators) with recurrent neural networks by training an input signal to be associated with a particular sequence of movements. I’m struggling to think of other internal representations that are well understood.

A long-term principle in neuroscience has been that successive layers of the brain attempt to decorrelate their responses to produce ever-finer features. Tishby and Zaslavsky 2015 suggest that a similar principle applies to Deep Networks: the network has a constrained mapping between input and output, and it is trying to find representations that preserve as much information between input and output as possible given the limited bandwidth it has (number of layers, number of units). It should not be surprising that this entails something like different forms of PCA or ICA or other signal-detection frameworks.
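
For reference, the information bottleneck objective behind that argument is usually written something like this (my notation, not a quote from the paper; X is the input, Y the output, T the internal representation, and β trades compression against prediction):

```latex
\min_{p(t \mid x)} \; \mathcal{L} \;=\; I(X;T) \;-\; \beta \, I(T;Y)
```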

One of the nice things about Deep Networks is that you do not have to explicitly code for this in order to find these features – they are costless in a way. You can train for a particular task – a visually-driven one, a path-driven one, an acoustic-driven one – and these features will just fall out. Not only will these features fall out, but neurons which are deeper in the pathway will also have similar activity. This is a much harder problem, and one to which “run PCA again” or “run ICA again” will not give a good answer.

What other neural representations have we not yet seen in neural networks?

Behold, the behaving brain!

In my opinion, THE most important shift in neuroscience over the past few years has been the focus on how behavior changes neural function across the whole brain. Even the sensory systems – supposedly passive passers-on of perfectly produced pictures of the world – will be shifted in unique ways by behavior. An animal walking will have different responses to visual stimuli than an animal that is just sitting around. Almost certainly, other behaviors will have other effects on the animal.

A pair of papers this week have made that point rather elegantly. First, Carsen Stringer and Marius Pachitariu from the Carandini/Harris labs have gobs of data from when they were recording ~10,000 neurons simultaneously. Marius Pachitariu has an excellent twitter thread explaining the work. I just want to take one particular point from this paper which is that you can explain a surprising amount of variance in the primary visual cortex – and all across the brain – simply by looking at the movement of the animal’s face.

In the figures below, they have taken movies of an animal’s face, extracted the motion energy (roughly, how much movement there is at that location in the video), and then used PCA to find the common ways that you can describe that movement. Using this kind of common motion, they then tried to predict the activity of individual neurons – while ignoring the traditional sensory or task information that you would normally be looking at.
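
In spirit, and very roughly (this is not their code, and the paper uses fancier reduced-rank regression), the analysis looks like the sketch below; it assumes `movie` is an (n_frames, height, width) array from the face camera and `spikes` is an (n_frames, n_neurons) array of simultaneously recorded activity, already aligned in time:

```python
# Sketch: motion energy of a face movie -> PCA -> linear prediction of neurons.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

motion_energy = np.abs(np.diff(movie, axis=0)).reshape(movie.shape[0] - 1, -1)
face_pcs = PCA(n_components=50).fit_transform(motion_energy)   # common modes of face motion

X_train, X_test, y_train, y_test = train_test_split(
    face_pcs, spikes[1:], test_size=0.25, shuffle=False)        # keep time order
model = Ridge(alpha=1.0).fit(X_train, y_train)
print("variance explained from face motion alone:", model.score(X_test, y_test))
```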

The other paper is from Simon Musall and Matt Kaufman in Anne Churchland’s lab; they also have a nice twitter description of their work. Here, they used a technique that is able to image the whole brain simultaneously (though I am not sure to what depth), at the cost of resolution (individual neurons are not identifiable but are averaged together). The animals are doing a task where they need to tell the difference between two tones, or two flashes of light. You can look for the brain areas involved in choice, or the areas involved in responding to vision or audio, and they are there (choice, kind of?). But if you look at where movement is being represented it is everywhere.

The things that you would normally look for – the amount of brain activity you can explain by an animal’s decisions or its sensory responses – explain very little unique information.

This latter point is really important. If you had looked at the data and ignored the movement, you would have certainly found neurons that were correlated with decision-making. But once you take into account movement, that correlation drops away – the decisions are too correlated with general movement variables. People need to start thinking about how much of their neural data is responding to the task the animal is doing and how much is due to movement variables that are aligned to the task. This is really important! Simple averaging will not wash away this movement.

There is a lot more to both of these papers and both will be more than worth your time to dig into.

I’m not sure if you would have noticed this effect in either case if they weren’t recording from massive numbers of neurons simultaneously. This is a brave new world of neuroscience. How do we deal with this massively complex behavioral data at the same time that we deal with massive neural populations?

In my mind, the gold standard for how to analyze this data comes from Eva Naumann and James Fitzgerald in a paper out of the Engert lab. They are analyzing data from the whole brain of the zebrafish as it fictively swims around and responds to some moving background. Rather than throwing up their hands at the complexity of this data and the number of moving pieces what they did was very precisely quantify one particular aspect of the behavior. Then they followed the circuit step by step and tried to understand how the quantified behavior was transformed in the circuit. How did the visual stimuli guide the fish’s orientation in the water? What were the different ways the retina represented that visual information? How was this transformed by the relays into the brain? How was this information further transformed in the next step? How did the motor centers generate the different types of behavior that were quantified?

The brain evolved to produce behavior. In my opinion there is no way to understand the brain – any of it – if you don’t understand the behavior that the animal is producing.

Monday Open Question: How many types of neurons are there in the brain?

How many types of neurons are in the brain? Not just number, but classes that represent some fundamental unit of computation? I tweeted an article about this a couple days ago and (justly) got pilloried for saying it counted classes in the brain rather than in two cortical regions. So what is the answer for the whole brain?

Obviously the answer depends on the brain that you are talking about. In the nematode C. elegans, we know that every hermaphrodite has 302 neurons and every male has 381. I believe these specifically male neurons get pruned in the developmental process if the animal does not become a male. These neurons tend to come in symmetric pairs or quartets, one showing up on each side of the body, so the number of neural ‘classes’ is on the order of 118 – though there is evidence that some neurons can be slightly different between their left and right side (ASEL and ASER, for example). Fruit flies (Drosophila) also show sex-specific neurons, with the genes Fruitless and Doublesex controlling whether certain neurons are masculinized or feminized. So not only are there going to be different classes of neurons in males and females, we know that there are single (or, again, symmetric) neurons that control single behaviors. On the other hand, in the fruit fly retina there are definitely distinct classes of neurons that are tiled across the eye. This should frame our thinking about the number of neural classes – there are classes with large numbers of neurons where convolution is useful (repeating the same computation across some space, such as visual or auditory or even musculature space) but perhaps neural function becomes more specific and class-less once specific outputs are needed.

The fruit fly brain may seem a bit silly: why bother comparing it to us cortical mammals? But adult Drosophila have roughly the same number of neurons as larval zebrafish, a vertebrate with a cerebrum that is a popular organism to study in neuroscience. So do we think that the zebrafish has just as many pre-planned neurons as Drosophila? Or is its neural structure somewhat looser, more patterned? I don’t have an answer here, but I think it is worth thinking about the similarities and differences between these organisms, which have similar numbers of neurons but quite different environmental and developmental needs.

Let’s turn to mammals. The area with the most well-defined number of cell classes is probably the retina. I’m not sure of the up-to-date estimates for the number of cell classes, but the classic description has two classes (rods/cones) in the input layer of the retina, which can be further split depending on the number of colors an animal can see – for instance, humans have S, M, and L cones roughly corresponding to blue, green, and red light. This review roughly estimates that further into the eye there are two types of horizontal cells (first layer), ~12 types of bipolar cells (second layer), and ~30 types of amacrine cells (third layer). From other sources we think there is something on the order of ~30 types of retinal ganglion cells, the output from the eye into the brain. Interestingly, this is roughly the same number of defined classes that we think the fruit fly has! But again, there may be species specificity here; something on the order of 95%+ of the output layer of the monkey retina is a single cell class. So the eye alone has at least 80 classes of neurons and quite probably more.
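
Just tallying the counts above (all approximate, and species-dependent):

```python
# Rough count of the retinal cell classes mentioned in the text.
retina_classes = {
    "photoreceptors (rods + S/M/L cones)": 4,
    "horizontal cells": 2,
    "bipolar cells": 12,
    "amacrine cells": 30,
    "retinal ganglion cells": 30,
}
print(sum(retina_classes.values()))   # ~78, i.e. "at least 80" give or take how you count
```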

The olfactory bulb is probably more complex. In mice, at least, the number of olfactory glomeruli is probably on the order of one or two thousand? Though I would expect that once past this layer the classification will look more like retina or cortex – on the order of tens of subtypes.

 

Now let’s think about the cortex. The paper that inspired this post tried to estimate the number of cell classes by using single-cell RNA-sequencing in mice to identify the transcripts that are present in different cells and then clustering them into distinct sub-classes. It should be clear up front that the number of clusters you identify (1) may not be categorical but could be continuous between types of neurons and (2) may be different than if you clustered with a different method or with different types of data – functional responses, for instance. The authors in this paper make clear that they certainly find cells that look ‘intermediate’ between their clusters, so whatever categories we get may not be very firm. For instance, in the following figure the size of the circles represents the number of cells they identify in a particular cluster and the thickness of the line between two circles is how many cells are intermediate between the two clusters.

 

Without getting into too many details, they find that roughly 50 inhibitory neuron types are common, in their transcript profiles, between two distinct anatomical regions, suggesting that the types of inhibition may be a common, repeated pattern across the brain. However, the ~50 excitatory neuron types were essentially unique to each of the two regions.

Chuck Stevens has an interesting paper where he attempts to find lower and upper bounds on the number of possible cell classes in cortex. Let’s say that we accept the tiling principle, that the same types of cells are repeated again and again in a motif:

This argument can be extended to the neocortex. Underneath 1 mm² of most regions of the primate cortical surface are about 10⁵ neurons — the striate cortex is an exception with twice the number — each of which covers say 0.05 mm² with its dendritic arbor (assumed to be 0.25 mm in diameter). Twenty neurons with dendritic arbors of this size would be required to cover a square millimetre of cortex, so the upper limit on number of cell types, if each must tile the cortex, is 10⁵/20 = 5000, or an average of 1000 per layer. Now assume that the cortex has 10 times more neurons of each type than required to cover the cortex, a redundancy factor of 10 as guessed above for hippocampus: we thus would have about 100 neuron types per layer. If we believe there are a dozen ganglion cell types, two dozen amacrine cell types, and four dozen different kinds of inhibitory neurons in the CA1 region of hippocampus, 100 cell types per layer of neocortex seems like a reasonable number – not good news for the micromodelers.

Let’s update this estimate; we think that there may be 25 excitatory cell types per region. I don’t actually know off-hand the percentage of mouse cortex that these two regions encompass (a motor region, ALM, and a visual region, VISp) but let’s say they are roughly 10% of the cortical area each (this could be grossly wrong so feel free to correct me). We then might believe that cortex has on the order of 25 * 10 ~ 250 excitatory cell classes and ~50 inhibitory cell classes. Does this feel right? 300 classes for all of cortex?
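
For the record, here is the back-of-the-envelope arithmetic from the Stevens quote and from my update; every number is a guess taken from the text, not a measurement:

```python
# Stevens-style tiling bound, then the single-cell-RNA-seq-inspired update.
neurons_per_mm2 = 1e5            # neurons under 1 mm^2 of cortex
arbor_area_mm2 = 0.05            # dendritic arbor coverage per neuron
neurons_to_tile = 1.0 / arbor_area_mm2            # 20 neurons cover a mm^2
upper_bound = neurons_per_mm2 / neurons_to_tile   # 5000 types if each must tile
redundancy = 10                                   # assumed extra neurons per type
stevens_estimate = upper_bound / redundancy       # ~500 total, ~100 per layer

excitatory_per_region = 25
fraction_of_cortex_per_region = 0.1               # my guess, could be grossly wrong
excitatory_total = excitatory_per_region / fraction_of_cortex_per_region   # ~250
inhibitory_total = 50                             # shared across regions
print(stevens_estimate, excitatory_total + inhibitory_total)   # 500.0 300.0
```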

But the cortex contains a minority of the cells in the brain – the majority are in a single structure, the cerebellum. I don’t know of an estimate of the number of neural classes there, but a structure known for its beautifully tiling neurons seems likely to have a fair bit of structure in its cell classes. What would we estimate here? Something similar to a primary sensory area, with ~50-100 cell classes? Something more, something less?

And what about other subcortical regions in the brain like hypothalamus that are more directly responsible for specific behavior? Should we expect many thousands of distinct subtypes for each of the behaviors or something more patterned?

Tell me where I’m wrong.

Two views of science

The pessimist:

These quotes give you a sense of these two books, both of which build on what Alan Richardson calls “one of the great lessons of the cognitive revolution”: “just how much of mental life remains closed to introspection.” As a brief summation, the unified thesis of Nørretranders’s and Wilson’s works looks something like this: We are not really in control. Not only are we not in control, but we are not even aware of the things of which we are not in control. Our ability to judge anything with any accuracy is a lie, as is our ability to perceive these lies as lies. Consciousness masquerades as awareness and agency, but the sense of self it conjures is an illusion. We are stranded in the great opaque secret of our biology, and what we call subjectivity is a powerless epiphenomenon, sort of like a helpless rider on the back of a galloping horse—the view is great, but pulling on the reins does nothing.

If this description of reality feels familiar to you, it’s because such a neuroscientifically inspired pessimism is a quiet but powerful strain of modern thinking.

The optimists:

The beauty of a living thing is not the atoms that go into it, but the way those atoms are put together (Carl Sagan)

Poets say science takes away from the beauty of the stars – mere globs of gas atoms. I too can see the stars on a desert night, and feel them. But do I see less or more? The vastness of the heavens stretches my imagination – stuck on this carousel my little eye can catch one – million – year – old light. A vast pattern – of which I am a part… What is the pattern, or the meaning, or the why? It does not do harm to the mystery to know a little about it. For far more marvelous is the truth than any artists of the past imagined it. Why do the poets of the present not speak of it? What men are poets who can speak of Jupiter if he were a man, but if he is an immense spinning sphere of methane and ammonia must be silent? (Richard Feynman)

These are not necessarily mutually exclusive.

But I also found this Feynman poem, which I had never heard before:

…I stand at the seashore, alone, and start to think.

There are the rushing waves, mountains of molecules
Each stupidly minding its own business
Trillions apart, yet forming white surf in unison

Ages on ages, before any eyes could see
Year after year, thunderously pounding the shore as now
For whom, for what?
On a dead planet, with no life to entertain

Never at rest, tortured by energy
Wasted prodigiously by the sun, poured into space
A mite makes the sea roar

Deep in the sea, all molecules repeat the patterns
Of one another till complex new ones are formed
They make others like themselves
And a new dance starts

Growing in size and complexity
Living things, masses of atoms, DNA, protein
Dancing a pattern ever more intricate

Out of the cradle onto the dry land
Here it is standing
Atoms with consciousness, matter with curiosity
Stands at the sea, wonders at wondering

I, a universe of atoms
An atom in the universe

(This is obviously a response to one of my favorite poems, When I Have Fears That I May Cease To Be)

#Cosyne18, by the numbers

Where does the time go? Another year, another look at my favorite conference: Cosyne. Cosyne is a Computational and Systems Neuroscience conference, this year held in Denver. I think it’s useful to use it each year to assess where the field is and where it may be heading.

First up is who was the most active – and this year it is Ken Harris, whom I dub this year’s Hierarch of Cosyne. The most active in previous years were:

  • 2004: L. Abbott/M. Meister
  • 2005: A. Zador
  • 2006: P. Dayan
  • 2007: L. Paninski
  • 2008: L. Paninski
  • 2009: J. Victor
  • 2010: A. Zador
  • 2011: L. Paninski
  • 2012: E. Simoncelli
  • 2013: J. Pillow/L. Abbott/L. Paninski
  • 2014: W. Gerstner
  • 2015: C. Brody
  • 2016: X. Wang
  • 2017: J. Pillow

If you look at the most active across all of Cosyne’s history, well, nothing ever changes.

Visualizing the network diagram of co-authorships reveals some of the structure in the computational neuroscience community (click for high-resolution PDF):

And zooming in:

Plotting the network of the whole history of Cosyne is a mess – there are too many dense connections. Here are three other ways of looking at it. First, only plotting the superusers (people who have 20+ abstracts across Cosyne’s history, click for PDF):

Or alternately, the regulars (10+ abstracts across Cosyne’s history, click for PDF):

And, finally, the regulars + everyone they have collaborated with (click for PDF):

I’d say the long-term structure looks something like the New York Gang (green), the European Crew (purple), the High-Dimensional Deities (blue), the Ecstasy of Entropy (magenta), and some others that I can’t come up with good names for (comments welcome).

Memming asked whether the central cluster was getting more dispersed or less cliquey with time. This is kind of a hard question to answer. If you just look at how large the central connected group is over time the answer is a resounding no. The community is more cohesive and is more connected than ever before.

On the other hand, we can look within that central cluster. How tightly connected is it? If you look at mean path length – how long it takes to get from one author to another, like degrees of Kevin Bacon or an Erdos number (a Paninski number?) – then the largest cluster is becoming more dispersed. Dan Marinazzo suggested looking at network efficiency as a metric that is more robust to size. Network efficiency is roughly the inverse of path length, where 1 would mean you can get from any author to any other in a single step and 0 means it takes forever.
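
In case you want to play with these metrics yourself, a sketch with networkx (it assumes a co-authorship graph `G` for a given year has already been built, which is the boring part I'm not showing):

```python
# Sketch: size of the largest connected component, its mean path length,
# and the global efficiency of the whole graph.
import networkx as nx

largest_cc = max(nx.connected_components(G), key=len)
core = G.subgraph(largest_cc)

mean_path = nx.average_shortest_path_length(core)   # "Paninski number" within the big cluster
efficiency = nx.global_efficiency(G)                # ~inverse path length, more robust to size
print(len(largest_cc), mean_path, efficiency)
```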

I now also have two years of segmented abstracts (both accepted and rejected). What are the most popular topics at Cosyne? I used doc2vec, a method that can take a document and embed it in a high-dimensional space that represents the semantic topics that are being used, and then visualized it with t-SNE. The Cosyne Island that you see above is the density of abstracts at each given point. I’ve given the different islands names that represent the abstracts in each of them.
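
The pipeline is roughly the sketch below (gensim plus scikit-learn; `abstracts` is assumed to be a list of abstract strings, and the parameters here are placeholders rather than the ones I actually used):

```python
# Sketch: embed each abstract with doc2vec, then flatten to 2D with t-SNE.
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from gensim.utils import simple_preprocess
from sklearn.manifold import TSNE

docs = [TaggedDocument(simple_preprocess(text), [i]) for i, text in enumerate(abstracts)]
model = Doc2Vec(docs, vector_size=100, window=5, min_count=5, epochs=40)

vectors = np.array([model.dv[i] for i in range(len(abstracts))])
coords = TSNE(n_components=2, perplexity=30).fit_transform(vectors)
# a 2D density plot of `coords` gives the island-style map above
```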

If you look at the words that show up more in 2018’s accepted abstracts, they are “movements”, “uncertainty”, “motion”; looks like behavior!

The words that show up more in rejected abstracts are “orientation”, “techniques”, “highdimensional”, “retinal”, “spontaneous” 😦

We can also look at words that are more likely to be accepted in 2018 than 2017 (which are the big gainers):

And the big losers this year versus last year:

Here is a list of the twitter accounts that will be at Cosyne.

Previous years: [2014, 2015, 2016, 2017]

Communication by virus

‘Some half-baked conceptual thoughts about neuroscience’ alert

In the book Snow Crash, Neal Stephenson explores a future world that is being infected by a kind of language virus. Words and ideas have power beyond their basic physical form: they have the ability to cause people to do things. They can infect you, like a song that you just can’t get out of your head. They can make you transmit them to other people. And the book supposes a language so primal and powerful that it can completely and totally take you over.

Obviously that is just fiction. But communication in the biological world is complicated! It is not only about transmitting information but also about convincing the receiver of something. Humans communicate by language and by gesture. Animals sing and hiss and hoot. Bacteria communicate by sending signaling molecules to each other. Often these signals are not just to let someone know something but also to persuade them to do something else. Buy my book, a person says; stay away from me, I’m dangerous, the rattlesnake says; come over here and help me scoop up some nutrients, a bacterium signals.

And each of these organisms is made up of smaller things that are also communicating with each other. Animals have brains made up of neurons and glia and other meat, and these cells talk to each other. Neurons send chemicals across synapses to signal that they have gotten some information, processed it, and, just so you know, here is what they computed. The signals they send aren’t always simple. They can be excitatory to another neuron or inhibitory, a kind of integrating set of pluses and minuses for the other neuron to work on. But they can also be peptides and hormones that, in the right set of other neurons, will set new machinery to work, machinery that fundamentally changes how the neuron computes. In all of these scenarios, the neuron that receives the signal has some sort of receiving protein – a receptor – that is specially designed to detect those signaling molecules.

This being biology, it turns out that the story is even more complicated than we thought. Neurons are cells, and just like every other cell they have internal machinery that uses mRNAs as the instructions for building the protein machinery needed to operate. If a neuron needs more of one thing, it will synthesize more of that mRNA and translate it into new protein. Roughly, the more mRNA you have, the more of that protein – tiny little machines that live inside the cell – you will produce.

This synthesis and translation is behind much of how neurons learn. The saying goes that neurons that fire together wire together, so that when they respond to things at the same time (such as being in one location at the same time you feel sad) they will tend to strengthen the link between them to create memories. The physical manifestation of this is making more of a specific protein (a receptor, say) so that now the same signal will activate more receptors and result in a stronger link.

And that was pretty much the story so far. But it turns out that there is a new wrinkle: neurons can directly ship mRNAs into each other in a virus-like fashion, avoiding the need for receptors altogether. There is a gene called Arc which is involved in many different pieces of the plasticity puzzle. Looking at the sequence of the gene, it turns out that a portion of the code creates a virus-like structure that can encapsulate RNAs and burrow through other cells’ membranes. This RNA is then released into the other cell. And this mechanism works: this Arc-mediated signaling actually causes strengthening of synapses.

Who would have believed this? That the building blocks for little machines are being sent directly into another cell? If classic synaptic transmission is kind of like two cells talking, this is like just stuffing someone else’s face with food or drugs. This isn’t in the standard repertoire of how we think about communication; this is more like an intentional mind-virus.

There is this story in science about how the egg was traditionally perceived to be a passive receiver during fertilization. In reality, eggs are able to actively choose which sperm they accept – they have a choice!

The standard way to think about neurons is somewhat passive. Yes, they can excite or inhibit the neurons they communicate with but, at the end of the day, they are passively relaying whatever information they contain. This is true not only in biological neurons but also in artificial neural networks. The neuron at the other end of the system is free to do whatever it wants with that information. Perhaps a reconceptualization is in order. Are neurons more active at persuasion than we had thought before? Not just a selfish gene but selfish information from selfish neurons? Each neuron, less interested in maintaining its own information than in maintaining – directly or homeostatically – properties of the whole network? Neurons do not simply passively transmit information: they attempt to actively guide it.

2017 in review (a quantified life)

I have always found it useful to take advantage of the New Year and reflect on what I have done over the past year. The day itself is a useful bookmark in life, inevitably trapped between leaving town for Christmas and coming back to town after the New Year begins. Because of the enforced downtime, what I happen to read has a strong influence on me – last year, I hopped on the Marie Kondo craze and really did manage to do a better job of keeping clean (kind of), but more importantly of organizing my clothes by rolling and folding them until they fit so perfectly in my drawers. So that was useful, I guess.

The last year has been okay. Not great, not terrible. Kind of middle-of-the-road as my life goes. There have been some big wins (organizing a fantastic workshop at Cosyne on neurobehavioral analysis and being awarded a Simons Foundation fellowship that lets me join a fantastic group of scientists) and some frustrations (mostly scientific work that goes slowly slowly slowly).

One thing that sticks out for me over this past year – over these past two years, actually – is how little time I have spent on this blog. Or rather, how little of what I have done has been published on this blog. It’s not for a lack of time! I have actually done a fair bit of writing but am constantly stuck after a paragraph or two, my motivation waning until it disappears completely. This is largely due to how I responded to some structural features in my life, mostly a long commute and a lot of things that I want to accomplish.

Last year I had the “clever” idea of creating a strict regimen of hourly and daily goals, both for work and for my life. Do this analysis from 3pm – 4pm. Debug that code from 4pm – 5pm. Play the piano from 8pm – 9pm. Things like that. Maybe this works for other people? But I end up overambitious, constantly adding things that I need to do today, so much so that I rapidly switch from project to project, each slot mangled into nonsense by the little new things that always spring up on any given day. Micromanaging yourself is the worst kind of managing, especially when you don’t realize you are doing it.

This is where what I read over winter break made me think. One of the three articles that influenced me was about the nature of work:

For unlike someone devoted to the life of contemplation, a total worker takes herself to be primordially an agent standing before the world, which is construed as an endless set of tasks extending into the indeterminate future. Following this taskification of the world, she sees time as a scarce resource to be used prudently, is always concerned with what is to be done, and is often anxious both about whether this is the right thing to do now and about there always being more to do. Crucially, the attitude of the total worker is not grasped best in cases of overwork, but rather in the everyday way in which he is single-mindedly focused on tasks to be completed, with productivity, effectiveness and efficiency to be enhanced. How? Through the modes of effective planning, skilful prioritising and timely delegation. The total worker, in brief, is a figure of ceaseless, tensed, busied activity: a figure, whose main affliction is a deep existential restlessness fixated on producing the useful.

Yup, that pretty much sums up how I was trying to organize my life. In the hope of accomplishing more I ended up doing less. This year I am trying a less-is-more approach: have fewer, more achievable goals each day/month/time unit; have more unstructured time; read more widely; and so on. Instead of saying I need to learn piano and I need to make art and I need to play with arduinos and I need to memorize more poetry, finding more and more things that I need to do, I will just list some things I’m interested in doing. I can look at that list every so often to remind myself and then allow myself to flow into the things I am most interested in rather than forcing it.

I was lucky enough in graduate school to join a lunch with Eve Marder. There are two types of scientists, she said: starters and finishers. Some people start a lot of projects; some people finish a few. This has always stuck with me. This past year I have been trying to maximize how many things I can work on – and it turns out that is a lot of different things. I want to spend this year doing a couple of things at a time and finishing them. Doing them well.

I have this memory of Wittgenstein declaring in the Tractatus that “the purpose of the Philosopher is to clarify.” I must have confabulated that quote because I could never find it again. Still, it’s my favorite thing that Wittgenstein ever said. For a scientist, the aphorism should be that “the purpose of the Scientist is to simplify.”

There was an article in the New York Times recently from an 88-year-old man looking back on the 18 years he has lived in the millennium:

I’m trying to break other habits in far more conventional ways. As in many long marriages, my wife and I enjoy spending time with the same friends, watch the same television programs, favor the same restaurants, schedule vacations to many of the same places, avoid activities that venture too far from the familiar.

We decided to become more adventurous, shedding some of those habits. European friends of ours always seem to find the time for an afternoon coffee or glass of wine, something we never did. Now, spontaneously, one of us will suggest going to a coffee shop or cafe just to talk, and we do. It’s hardly a lifestyle revolution, but it does encourage us to examine everything we do automatically, and brings some freshness to a marriage that started when Dwight Eisenhower was elected president.

The best memories can come from unexpected experiences. The best thoughts can come from exposure to unexpected ideas. Attempting to radically organize my life has left me without those little moments where my mind wanders from topic to topic. Efficiency. I have cut back on my reading for pleasure, most of which now comes on audiobook during my commute and somehow seems to prevent deep thinking. But the reason I am interested in science in the first place is because of the questions about who we are and how we behave that come out of thinking about the things I read! The solution, again, is to remove some of the structure I am imposing on my life, simplify and force myself to let go of the need to always be doing something quantifiable and useful.

Looping back, this is why sitting down to write, and finishing that writing, is one of my big goals for the year. Because I find writing fun! And I find it the best way to really think rigorously, to explore new thoughts and new ideas. There is much less of a need to do so much, to try so many projects, when I can read and think about something and write about it to make something useful and enjoyable instead of making a huge product out of it.

I am not a Stoic but find Stoic thinking useful. Something I read over the holidays:

Let me then introduce you to three fundamental ideas of Stoicism – one theoretical, the other two practical – to explain why I’ve become what I call a secular Stoic. To begin with, the Stoics – a school of philosophers who flourished in the Greek and Roman worlds for several hundred years from the third century BCE – thought that, in order to figure out how to live our lives (what they called ethics), we need to study two other topics: physics and logic. “Physics” meant an understanding of the world, as best as human beings can grasp it, which is done by way of all the natural sciences as well as by metaphysics.

The reason that physics is considered so important is that attempting to live while adopting grossly incorrect notions about how the world works is a recipe for disaster. “Logic” meant not only formal reasoning, but also what we would today call cognitive science: if we don’t know how to use our mind correctly, including an awareness of its pitfalls, then we are not going to be in a position to live a good life.

Beyond reading and self-reflection, the best way to understand your life is to quantify it. Quantification is the best way to peer into the past and really cut through hazy memories that are full of holes. What did I really do? What did I really think? This isn’t an attempt at stricture or rigidity: it’s an attempt at radical self-knowledge. I’m fairly active at journaling, which is the first step, but I also keep track of what I eat and how I exercise using MyFitnessPal, books I read on Goodreads, movies I watch on letterboxd, where I have been using my phone to track me, and science articles I read using Evernote (I used to be very active on yelp but somehow lost track of that). Using these tools to look back on the past year is a great experience: “Oh yeah, I loved that movie!” or “Ugh, I can’t believe I read that whole book.” or just reminding myself of pleasant memories from a short trip to LA.

I’d like to expand that this year to include some other relevant data – ‘skills’ I work on like playing piano to see whether I’m actually improving, TV I watch (because maybe I watch too much, or not enough!), what music I’m listening to, where I spend my money (I already avidly keep track of the fluctuations in how much I have month-to-month), and what important experiences I have (vacations; hikes; seeing exciting new art). There don’t seem to be any good apps for these things outside of Mint, so I have assembled a giant Google Sheet for all of these categories to make it easier to access and analyze the data, with a main Sheet that I can use every month to look back and make some qualitative observations. Oh, and I’m also building a bunch of arduinos that can sense temperature, humidity, light, and sound intensity to put in different rooms of my house to log those things (mostly because my house is always either too hot or too cold and the thermostat is meaningless and I want to figure out why, and partly because I want to make sweet visualizations of the activity in my house throughout the year).

So my lists!

These are the movies I watched in 2017 and to which I gave 5 stars (no particular order):

Embrace of the Serpent
American Honey
Victoria
Gentleman’s Agreement
T2: Trainspotting
Logan
Moana
While We’re Young
Moonlight

With honorable mentions to My Life As A Zucchini, Blade Runner 2049, and Singles.

These are the books I read in 2017 and liked the most:

The Invisibility Cloak (Ge Fei)
The Wind-up Bird Chronicle (Murakami)
Ficciones (Borges)
The Stars Are Legion (Hurley)
Redshirts (Scalzi)
Red Mars (Robinson)
We Are Legion (Taylor)
Permutation City (Egan)
Neuromancer (Gibson)

That is a lot more scifi than I normally read, and several are books that I had read previously.

Where was I (generated using this)?

There was an article a few years ago on the predictability of human movement. It turns out that people are pretty predictable! If you know where they are at one moment, you can guess where they will be the next. That’s not too surprising, though, is it? You’re mostly at work or at home. If you go to a bar, there is a higher than random probability that you’ll go home afterward.

The data that you can ask your Android phone to collect on you is unfortunately a bit impoverished. It doesn’t log everything you do but is biased toward times when you check your phone (lunch, when you’re the passenger in a long car ride home, etc). Still, it captures the broad features of the day.

I’ve been keeping track of the data for two years now so I downloaded the data and did a quick analysis about the entropy of my own life. How predictable is my location? If you bin the data into 1 sq. mile bins, entropy is a measure of how much uncertainty there is in where I was. 1 bit of entropy would mean you could guess where I was down to the mile with only one yes or no question; 2 bits of entropy would mean you could guess with two questions; and so on.
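
Concretely, the calculation is just a Shannon entropy over binned locations. A sketch, assuming `lats` and `lons` are numpy arrays of location samples for whatever window of time you care about:

```python
# Shannon entropy of binned location, in bits.
import numpy as np

def location_entropy_bits(lats, lons, bin_deg=0.0145):   # ~1 mile of latitude
    lat_bins = np.floor(lats / bin_deg).astype(int)
    lon_bins = np.floor(lons / bin_deg).astype(int)      # ignores longitude shrinkage
    cells = lat_bins * 100000 + lon_bins                 # one id per spatial bin
    _, counts = np.unique(cells, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# ~3 bits means roughly three yes/no questions to pin down the square mile
```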

On any given day of the week, there are roughly 3 bits of entropy in my location (much less on weekends). But as you can see, it varies a lot by month depending on whether I am traveling or not.

In 2016 (the weird first month is because that’s when I started collecting data and only got a few days):

In 2017:

I will leave you with an image from the last thing I was reading in 2017, and which was consistently the weirdest thing I read: Battle Angel Alita.

Monday open question: can invertebrates be ‘cognitive’?

Janelia Farm, the research center of the Howard Hughes Medical Institute, recently announced its upcoming research focuses. One of them was controversial: mechanistic cognitive neuroscience. Here’s what they had to say about it:

How does the brain enable cognition? We are developing an integrated program in which tool-builders, biologists, and theorists collaborate to clear the technical, conceptual, and computational hurdles that have kept the most intriguing aspects of cognition beyond the purview of mechanistic investigation. The program will establish tight links across our existing genetic model systems —flies, fish, and rodents— and exploit their complementary strengths. We aim to make the fly the benchmark for reductionist explanations of neural processes underlying complex behavior, leveraging conceptual research by mammalian neuroscientists. The fly has strong potential as a model for rapid mechanistic insights, due to its small brain size, the likelihood of obtaining a complete wiring diagram of its brain, and increasingly powerful methods for measuring and manipulating genetically defined populations of cells in behaving animals. We expect this research to reveal strategies for better understanding the more sophisticated neural and behavioral features of vertebrates. In turn, we expect our vertebrate research to expose complex computational mechanisms, some of which we can study at a detailed level in the fly.

Why was this so controversial? This sentence: “In turn, we expect our vertebrate research to expose complex computational mechanisms, some of which we can study at a detailed level in the fly”. Yes, the humble fly may or may not have cognitive states.

What are some cognitive behaviors that a fly can perform? They use reinforcement learning, can attend to things, have visual place memory. Other invertebrates can recognize faces and perform complex path integration. On the other hand, they have very poor linguistic abilities.

It’s a truth of biology that theories rarely survive contact with new types of data. There is a kind of clarity from knowing the exact neural circuitry and dynamics that a minimal neural circuit needs. If I were studying, say, attention in primates I would be interested in the precise mechanisms that another species uses to accomplish a task similar to what I’m studying. There’s no guarantee that it will be the same mechanism – but is it so unreasonable? Is there a reason that insects would not display cognitive behavior?