Making MATLAB pretty

Alright all y’all haters, it’s MATLAB time.

For better or worse, MATLAB is the language that is used for scientific programming in neuroscience. But it, uh, has some issues when it comes to visualization. One major issue is the clusterfuck that is exporting graphics to vector files like eps. We have all exported a nice-looking image in MATLAB into a vectorized format that not only mangles the image but also ends up somehow needing thousands of layers, right?  Thankfully, Vy Vo pointed me to a package on github that is able to clean up these exported files.
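For the record, the workflow is roughly this – note that the cleanup function below is a made-up stand-in, since the exact call depends on which package you grab off github:

    % Export a figure to eps the usual way, then post-process the file.
    fig = figure;
    plot(rand(10, 2));
    print(fig, '-depsc', 'myfigure.eps');  % MATLAB's built-in eps export
    clean_eps_file('myfigure.eps');        % hypothetical cleanup call: merges
                                           % the thousands of redundant layers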

Here is my favorite example (before, after):

If you zoom in or click the image, you can see the awful crosshatching in the before image. Even better, it goes from 11,775 layers before to just 76 after.

On top of this, gramm is a toolbox to add ggplot2-like visualization capabilities to MATLAB:

(Although personally, I like the new MATLAB default color-scheme – but these plotting functions are light-years better than the standard package.)
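If you haven't seen gramm before, it works by mapping your data onto "aesthetics" and then layering geometries on top, just like ggplot2. A minimal sketch with toy data (assuming gramm is on your path; check its docs for the full API):

    % ggplot2-style layered plotting in MATLAB with gramm.
    x = randn(100, 1);
    y = 2 * x + randn(100, 1);
    group = repmat({'A'; 'B'}, 50, 1);
    g = gramm('x', x, 'y', y, 'color', group);  % map data to aesthetics
    g.geom_point();                             % scatter layer
    g.stat_smooth();                            % smoothed-fit layer
    g.set_names('x', 'Input', 'y', 'Response', 'color', 'Group');
    g.draw();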

Update: Ben de Bivort shared his lab’s in-house preferred colormaps. I love ’em.

Update x2: Here’s another way to export your figures into eps nicely. Also, nice perceptually uniform color maps.

Why does the eye care about the nose?

The ear, the nose, the eye: all of the neurons closest to the environment are doing one thing: attempting to represent the outside world as perfectly as possible. Total perfection is not possible – you can only make the eye so large, and you only need to see so much detail in order to live your life. But if you were to try to predict what the neurons in the retina or the ear are doing based on what could provide as much information as possible, you'd do a really good job. Once that information is in the nervous system, the neurons that receive it can do whatever they want with it, processing it further or turning it directly into a command to blink or jump or just stare into space.

Even though this is the story that all of us neuroscientists get told, it's not the full story. A while back, I posted that the retina receives input from other places in the brain. That just seems weird from this perspective. If the retina is focused on extracting useful information about the visual world, why would it care about how the world smells?

One simple explanation might be that the neurons only want to code for surprising information. Maybe the nose can help out with that? After all, if something is predictable then it is useless; you already know about it! No need to waste precious bits. This seems to be the purpose of certain feedback signals to the fly eye. A few recent papers have shown that neurons in the eye that respond to horizontal or vertical motion receive signals about how the animal is moving, so that when the animal moves to the left, the horizontal cells expect leftward motion – and respond only to leftward motion above and beyond what the animal itself is causing. But again – what could this have to do with smells?
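(Before we get to smells: to make that efference-copy idea concrete, here is a toy sketch. Everything in it is illustrative – no actual fly numbers involved.)

    % Respond only to motion above and beyond what self-movement predicts.
    t = linspace(0, 1, 1000);
    measured  = sin(2*pi*t) + 0.3 * (t > 0.5);  % motion at the eye: self-motion plus a real event
    predicted = sin(2*pi*t);                    % efference copy: motion the animal itself causes
    surprise  = max(measured - predicted, 0);   % what the cell reports: the unpredicted part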

Let’s think for a second about some times when the olfactory system uses non-olfactory information. The olfactory system should be trying to represent the smell-world as well as it can, just like the visual system is trying to represent the image-world. But the olfactory system is directly modulated depending on the needs of an animal at any given moment. For instance, a hungry fly will release a peptide that modifies how much a set of neurons that respond to particular odors can signal the rest of the brain. In other words, how hungry an animal is determines how well it can smell something!

These two stories – how the eye interacts with the motion of the body, how the nose interacts with hunger – might give us a hint about what is happening. The sensory systems aren’t just trying to represent as much information about the world as possible, they are trying to represent as much information about useful stuff as possible. The classical view of sensory systems is a fundamentally static one, that they have evolved to take advantage of the consistencies in the world to provide relevant information as efficiently as possible*. But the world is a dynamic place, and the needs of an animal at one time are different from the needs of the animal at another.

Take an example from tadpoles. When the tadpole is in a very dim environment, it has a harder time separating dark objects from the background. The world just has less contrast (try turning down the brightness on your screen and reading this – you’ll get the idea). One way that these tadpoles control their ability to increase or decrease contrast is through a neuromodulator that changes the resting potential of a cell (how responsive it is to stimuli), but only over relatively long timescales. This is not fast adaptation but slow adaptation to the changing world. The end result of this is that tadpoles are better able to see moving objects – but presumably at the expense of being worse at seeing something else. That seems like a pretty direct way of going from a need for the animal to code certain visual information more efficiently to the act of doing it. The point is not that this is driven by a direct behavioral need of the animal – I have no idea if this is due to a desire to hunt or avoid objects or what-have-you. Instead, it’s an example of how an animal could control certain information if it wanted to.

This kind of behavioral gating does occur through feedback to the retina. Male zebrafish use a combination of smell and sight when they decide how they want to interact with other zebrafish. Certain olfactory neurons that respond to a chemical involved in mating send signals to neurons in the retina – making certain cells more or less responsive in the same way that tadpoles control the contrast of their world (above). It appears as if the olfactory information sends a signal to the eye that either gates or enhances the visual information – the stripe detection or what-have-you – that the little fishies use when they want to court another animal.

The sensory system is not perfect. It must make trade-offs about which information is important to keep and which can be thrown away, about how much of its limited bandwidth to spend on one signal or another. A lot of the structure comes naturally from evolution, representing a long-term learning of the structure of the world. But animals have needs that fluctuate over other timescales – and may require more computation than can be provided directly in the sensory area. How else would the eye know that it is time to mate?

What this doesn’t answer is why the modulation is happening here; why not downstream?


* This is a major simplification, obviously, and a lot of work has been done on adaptation, etc. in the retina.


Monday Open Question: what do you need to do to get a neuroscience job? (Updated)

A while back I asked for help obtaining information on people who had gotten a faculty job in 2016 – 2017. And it worked! With a lot of help, I managed to piece together a list of more than 70 people who got faculty jobs during this last year! I am sure it is incomplete (I keep getting new tips as of ten minutes ago…) but it is time to discuss some of the interesting features of the data.

First, the gender ratio: there are 44 men on the list to 33 women (57% men). Over at the neurorumblr, 62% of the people on the Postdoc List were men, which is roughly the same proportion.

To get more data, I focused on faculty hires who had a Google Scholar profile – it made it much easier to scrape data. It was suggested that people in National Academy of Sciences or HHMI labs may have a better chance of getting a faculty job. Out of the 51 people with a Google Scholar profile, 4 were in both NAS/HHMI labs, 8 were in HHMI-only labs, and 4 were in NAS-only labs. Only one person who was in an HHMI/NAS lab in grad school went to a non-HHMI/NAS lab. People also suggested that a prestigious fellowship (HHWF, Damon Runyon, Life Sciences, etc.) might make a difference. It is hard to tell, but there did not seem to be a huge number of fellowship-holders among those who got a job last year.

The model organisms they use are:

(15) Humans

(13) Mouse

(6) Rat

(4) Monkey

(3) Drosophila

(3) Pure computational

+ assorted others

Where are they all from? Here is the distribution of institutions the postdocs came from (update: though see the bottom of the post for more information):


In case you hadn't noticed, this is a pretty geographically-concentrated pool of institutions. Just adding up the schools in the NYC+ area (NYC itself, plus Yale and Princeton), the Bay Area, Greater DC (Hopkins + Janelia), and 'those Boston schools' accounts for the bulk of the list. I'm not sure this accurately represents the geographic distribution of neuroscientists.

What about their publications? They had a mean H-index of 11.98 (standard deviation ~ 4.21).

We always hear that “you need a Cell/Nature/Science paper in order to get a job”. 29.4% (15/51) of this pool have a first- or second-author CNS paper. 68% (35/51) have a first- or second-author Nature Neuroscience/Neuron/Nature Methods paper. 78% (40/51) have some combination of these papers. It’s possible that faculty hires have CNS papers in the pipeline, but unless every single issue of CNS is dedicated to people who just got a faculty job this probably isn’t the big deal it’s always made out to be.

There’s a broader theory that I’ve heard from several people (outlined here) that the underlying requirement is really the cumulative impact factor. I have used the metric described in the link, where the approximate impact factor is taken from first-author publications and second-author publications are discounted 75% (reviews are ignored). Here are the CIFs for all 51 candidates over the past 7 years (red is the mean):
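If you want to play along at home, the metric is simple to compute. A sketch with made-up numbers:

    % Cumulative impact factor: first-author papers count in full,
    % second-author papers are discounted 75%, reviews are ignored.
    journal_if = [14.2, 5.1, 11.3, 6.7];    % toy impact factors, one per paper
    author_pos = [1, 2, 1, 2];              % 1 = first author, 2 = second author
    is_review  = logical([0, 0, 1, 0]);
    weights = (author_pos == 1) + 0.25 * (author_pos == 2);
    weights(is_review) = 0;
    CIF = sum(weights .* journal_if)        % the cumulative impact factor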

I thought there might be a difference by model organism, but within imaginary error bars it looks roughly the same:

In terms of absolute IF of the publications, there is a clear bump in the two years prior to the candidate getting their job (though note all of the peaks in individual traces prior to that):

So far as I can tell, there is no strong signal in terms of the publications you had as a grad student. Basically, your graduate work and lab don't matter, except as a conduit to get a postdoc position.

To sum up: you don’t need a CNS paper, though a Nature Neuroscience/Neuron/Nature Methods paper or two is going to help you quite a bit. Publish it in the year or two before you go on the job market.

Oh, and live in New York+ or the Bay Area.


Update: the previous city/institution analysis was done on a subset of individuals that had Google Scholar profiles. When I used all of the data, I got this list of institutions/cities:

Updated x2:

I thought it might be interesting to see which journals people commonly co-publish in. It turns out, eh, it kind of is interesting and it kind of isn't. For all authors, here are the journals that they have jointly published in (where links represent the fact that someone has published in both journals):

And here are the journals they have published in as first authors:


Behavioral quantification: mapping the neural substrates of behavior

A new running theme on the blog: cool uses of behavioral quantification.

Some of the most exciting directions in behavioral science are the advances in behavioral quantification. Science often advances by being able to perform ever more precise measurements on ever-increasing amounts of data. Thanks to the increasing power of computers and advances in machine learning, we are now able to automatically extract massive amounts of behavioral data at a level of detail that was previously unobtainable.

A great example of this is a recently published paper out of Janelia Farm. Using an absolutely shocking 400,000 flies, the authors systematically activated small subsets of neurons and then observed what behaviors they performed. First, can you imagine a human scoring every moment of four hundred thousand animals as they behaved over fifteen minutes? That is 12.1 billion frames of data to sort through and classify.

Kristin Branson – the corresponding author on the paper – has been developing two pieces of software that allow for efficient and fast estimation of behavior. The first, Ctrax, tracks individual animals as they move around a small arena and assigns a position, an orientation, and various postural features (for instance, since they are fruit flies we can extract the angle of each wing). The second, JAABA, then uses combinations of these features, such as velocity, interfly distance, and so on, in order to identify behaviors. Users annotate videos with examples of when an animal is performing a particular behavior, and then the program will generate examples in other videos that it believes are the same behavior. An iterative back-and-forth between user and machine gradually narrows down what counts as a particular behavior and what doesn't, eventually allowing fully-automated classification of behavior in new videos.
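To give a flavor of what that pipeline looks like, here is a stripped-down sketch of the approach. To be clear, this is not JAABA's actual code – the features and annotations here are toy stand-ins – but JAABA does use a boosting classifier of this general sort:

    % Per-frame trajectory features in, per-frame behavior labels out.
    n_frames = 500;
    velocity   = rand(n_frames, 1);
    inter_fly  = rand(n_frames, 1);
    wing_angle = rand(n_frames, 1);
    X = [velocity, inter_fly, wing_angle];   % one row of features per frame
    y = velocity > 0.7;                      % stand-in for user-annotated frames
    mdl = fitcensemble(X, y, 'Method', 'GentleBoost');  % boosted classifier
    labels = predict(mdl, X);                % automatic scoring of new frames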

Then once you have this pipeline you can just stick a bunch of animals into a plate under a camera, activate said neural populations, let them do whatever they feel like doing, and get gobs and gobs of data. This allows you to understand, with neural precision, which neurons are responsible for any arbitrary behavior you desire. This lets you build maps – maps that help you understand where information is flowing through the brain. And since you know which of these lines are producing which behaviors, you can then go and find even more specific subsets of neurons that let you identify precise neural pathways.

Here are two examples. Flies sometimes walk backwards (moonwalking!). If you look at the image below, you can see (on the bottom) all the different neurons labeled in a fly brain that had an effect on this backward locomotion, and in the upper-right the more specific areas where the neurons are most likely located. In fruit fly brains, the bulbous protrusions where these colors are found are the eyes of the animal, with a couple flecks in the central brain.

This turns out to be incredibly accurate. Some of this moonwalking circuit was recently dissected, and a set of neurons running from the eye into the brain was linked to causing this behavior. The neurons (in green below) are in exactly the place you'd expect from the map above! They link to a set of neurons known as the 'moonwalker descending neurons', which send signals to the nerve (spinal) cord that cause the animal to walk backwards.

Of course, sometimes it can be more complicated. When a male fly is courting a female fly, he will extend one wing and vibrate it to produce a song. Here are the neurons related to that behavior (there are a lot):

There are two key points from this quantification. First, the sheer amount and quality of data it is now possible to collect and score gives us immense statistical precision about when, and in which contexts, behaviors occur. Second, our capacity to find new things is increasing because we can be increasingly agnostic about what we are looking for (so it is easier to find surprises in the data!).

References

Mapping the Neural Substrates of Behavior. Robie et al 2017.

See also: Big behavioral data: psychology, ethology and the foundations of neuroscience. Gomez-Marin et al 2014.


Monday Open Question: does neuroscience have anything to offer AI?

A review was published this week in Neuron by DeepMind luminary Demis Hassabis and colleagues about Neuroscience-inspired Artificial Intelligence. As one would expect from a journal called Neuron, the article was pretty positive about the use of neurons!

There have been two key concepts from neuroscience that are ubiquitous in the AI field today: Deep Learning and Reinforcement Learning. Both are very direct descendants of research from the neuroscience community. In fact, saying that Deep Learning is an outgrowth of neuroscience obscures the amount of influence neuroscience has had. It did not just gift the idea of connecting artificial neurons together to build a fictive brain, but much more technical ideas as well: convolutional neural networks that apply a single function repeatedly across their input, as the retina or visual cortex does; hierarchical processing in the way the brain goes from layer to layer; divisive normalization as a way to keep outputs within a reasonable and useful range. Similarly, Reinforcement Learning and all its variants have continued to expand and be developed by the cognitive community.
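To spell out the convolution point, the "single function repeated across the input" idea fits in three lines of MATLAB (toy input, one made-up filter):

    % One learned filter, applied identically at every position of the input.
    x = randn(1, 100);           % input signal (think: a row of pixels)
    w = [1, 0, -1];              % a single filter, reused everywhere
    y = conv(x, w, 'valid');     % the core operation of a convolutional layer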

Sounds great! So what about more recent inspirations? Here, Hassabis & co offer up the roles of attention, episodic memory, working memory, and 'continual learning'. But reading this, I came away more morose than inspired (see this thread). Why? Well, look at the example of attention. Attention comes in many forms: automatic, voluntary, bottom-up, top-down, executive, spatial, feature-based, object-based, and more. It sometimes means a sharpening of the collection of things a neuron responds to, so instead of being active in response to an edge oriented this way, that way, or another way, it only is active when it sees an edge oriented one particular way. But it sometimes means a narrowing of the area in space that it responds to. Sometimes responses between neurons become more diverse (decorrelated).

But this is not really how ‘attention’ works in deep networks. All of these examples seem primarily motivated by the underlying psychology, not the biological implementation. Which is fine! But does that mean that the biology has nothing to teach us? Even at best, I am not expecting Deep Networks to converge precisely to mammalian-based neural networks, nor that everything the brain does should be useful to AI.
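For contrast, here is roughly what 'attention' tends to mean in a deep network: a softmax-weighted sum over inputs. This is a generic sketch with toy dimensions, not anything from the review itself:

    % Dot-product soft attention over 10 inputs.
    q = randn(64, 1);                          % query vector
    K = randn(64, 10);                         % keys, one column per input
    V = randn(32, 10);                         % values, one column per input
    scores = K' * q;                           % how well each input matches the query
    alpha  = exp(scores) / sum(exp(scores));   % softmax: the attention weights
    context = V * alpha;                       % weighted sum of the values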

This leads to some normative questions: why hasn’t neuroscience contributed more, especially to Deep Learning? And should we even expect it to?

It could just be that the flow of information from neuroscience to AI is too weak. It's not exactly like there's a great list of "here are all the equations that describe how we think the brain works". If you wanted to use a more nitty-gritty implementation of attention, where would you turn? Scholarpedia? What if someone wants to move step-by-step through all the ways that visual attention contributes to visual processing? How would they do it? Answer: they would become a neuroscientist. Which doesn't really help, time-wise. But maybe, slowly over time, these two fields will become more integrated.

More to the point, why even try? AI and neuroscience are two very different fields; one is an engineering discipline of "how do we get this to work" and the other a scientific discipline of "why does this work". Who is to say that anything we learn from neuroscience would even be relevant to AI? Animals are bags of meat whose nervous systems are trying to solve all sorts of problems (like the wiring-length energy costs between neurons, physical transmission delays, the need to regulate blood osmolality, etc.) that AI has no real interest in or need to include, but that may be fundamental to how the nervous system has evolved. Is the brain the bird to AI's airplane, accomplishing the same job but engineered in a totally different way?

Then in the middle of writing this, a tweet came through my feed that made me think I had a lot of this wrong (I also realized I had become too fixated on ‘the present’ section of their paper and less on ‘the past’ which is only a few years old anyway).

The ‘best paper’ award at the CVPR 2017 conference went to this paper which connects blocks of layers together, passing forward information from one to the next.
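The core trick is easy to sketch: each block receives the concatenated outputs of all the blocks before it. Here is a toy stand-in (random matrices, not the paper's actual layers):

    % Dense connectivity: block k sees everything produced so far.
    x = randn(8, 1);
    outputs = {x};
    for k = 1:4
        input_k = cat(1, outputs{:});          % concatenate all earlier outputs
        W = randn(8, numel(input_k));          % stand-in for a learned layer
        outputs{end+1} = max(W * input_k, 0);  % ReLU block output
    end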

That looks a lot more like what cortex looks like! Though obviously sensory systems in biology are a bit more complicated:

And the advantages? "DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters."

So are the other features of cortex useful in some way? How? How do we have to implement them to make them useful? What are the drawbacks?

Neuroscience is big and unwieldy, spanning a huge number of different fields. But most of these fields are trying to solve exactly the same problem that Deep Learning is trying to solve in very similar ways. This is an incredibly exciting opportunity – a lot of Deep Learning is essentially applied theoretical neuroscience. Which of our hypotheses about why we have attention are true? Which are useless?

The skeletal system is part of the brain, too

It seems to be a uniformly forgotten fact that the brain is a biological organ, just the same as your liver or your spleen or your bones. Its goal – like every other organ's – is to keep your stupid collection of cells in one piece. The body is one, coherent organism. Just like any other collection of individuals, it needs to communicate in order to work together.

Many different organs are sending signals to the brain. One is your gut, which is innervated by the enteric nervous system. This “other” nervous system contains more neurons (~500 million) than the spinal cord, and about ten times as many neurons as a mouse has in its whole brain. Imagine that: living inside of you is an autonomous nervous system with sensory inputs and motor outputs.

We like to forget this. We like to point to animals like the octopus and ask, what could life be like as an animal whose nervous system is distributed across its body? Well, look in the mirror. What is it like? We have multiple autonomous nervous systems; we have computational processing spread across our body. Have you ever wondered what the ‘mind’ of your gastrointestinal system must think of the mind in the other parts of your body?

The body’s computations about what to do about the world aren’t limited to your nervous system: they are everywhere. This totality is so complete that even your very bones are participating, submitting votes about how you should be interacting with the world. Bones (apparently) secrete neurohormones that directly interact with the brain. These hormones then travel through the blood to make a small set of neurons more excitable, more ready to respond to the world. These neurons then become ready and willing to tell the rest of the brain to eat less food.

This bone-based signaling is a new finding and totally and completely surprising. I don’t recall anyone postulating a bone-brain axis before. Yet it turns out that substantial computations are performed all throughout the body that affect how we think. Animals that are hungry make decisions in a fundamentally different way, willing to become riskier and riskier.


A lot of this extra-brain processing happens on much slower timescales than the fast neuronal processing in the brain: it integrates information over much longer stretches of time. This mix of fast-and-slow processing is ubiquitous in animals; classification of the world is fast, but the body is both fast and slow.

People seem to forget that we are not one silicon instantiation of neural architecture away from replicating humans: we are meat machines.

References


Mosialou et al. MC4R-dependent suppression of appetite by bone-derived lipocalin 2. Nature 2017.

Please help me identify neuroscientists hired as tenure-track assistant profs in the 2016-17 faculty job season

Over at Dynamic Ecology, Jeremy Fox asked whether people could help identify recently-hired tenure-track professors in Ecology. When he did this last year, he found that 51% of North American assistant professors that were hired were women. I asked on twitter whether this would be worth doing for neuroscience and everyone seemed in favor so here goes –

If you know who was hired to fill one or more of the listed N. American assistant professor positions in neuroscience or an allied field, please email me with this information (neurorumblr@gmail.com).

I’m just going to quote him on the requirements:

I only want information that’s been made publicly available, for instance via an official announcement on a departmental website, or by someone tweeting something like “I’ve accepted a TT job at Some College, I start Aug. 1!” If you want to pass on the information that you yourself have been hired into a faculty position, that’s fine too. All you’re doing is saving me from googling publicly-available information myself to figure out who was hired for which positions. Please do not contact me to pass on confidential information, in particular confidential information about hiring that has not yet been totally finalized.

Please do not contact me with nth-hand “information” you heard through the grapevine. Not even if you’re confident it’s reliable.

I’m interested in positions at all institutions of higher education, not just research universities. Even if the position is a pure teaching position with no research duties.


Who cares about science?

It’s easy to say something like “you can’t put a dollar amount on the value of science” except you can, quite easily. Governments do it all the time! So how much does the US government value science? Look above and you can easily see that, adjusting for inflation, the US government cares less about science than at any time in the last twenty years. But over those twenty years, the population has grown by 20%.

Another way we could ask how much the US government values science is to look at how hard it is for a scientist to even be funded. If we look at how hard it is for a scientist to get funded to do research, you can see how devastating the cuts in funding are: the success rate has gone from 30.5% to 18.3% over twenty years. And that’s on average. How hard is it for young scientists?

The funding rate for an under-36 scientist is 3%. 3%!

I keep getting told not to worry, that science is a bipartisan issue. No one wants to implement Donald Trump's total devastation of the science budget. But if the support is so bipartisan, why do I not feel comforted? Why has investment in science decreased no matter who has been in power? Remember these numbers when the budget is passed on Friday; that is how much the government supports you.

And all that is without getting into the even more direct attacks on science coming from Trump and people like Lamar Smith, the chairman of the science committee.

The retina receives signals from all over the brain, and that is kind of weird

As a neuroscientist, when I think of the retina I am trained to think of a precise set of neurons that functions like a machine, grinding out the visual basis of the world and sending it on to the brain. It operates independently of the rest of the system with the only feedback coming from muscles that move the eye around and dilate the pupils. So when someone [Philipp Berens] casually mentioned to me that yes, the retina does in fact receive signals from the brain? Well, I was floored.

I suppose I should not have been surprised. In fruit flies, there has been a steady accumulation of evidence that the brain sends signals to the eye to get it ready to compensate for any movement the animal will make. Intuitively, that makes a lot of sense. If you are trying to make sense of the visual world, of course you would want to be able to compensate for any sudden changes that you already know about.

It turns out that there is a huge mass of feedback connections from the brain to the retina in birds and mammals, something termed the centrifugal visual system. And inputs are sent via this system from both visual areas and non-visual areas (olfactory, frontal, limbic, and so on). So imagine – your eye knows about what you are smelling. Why? In order to do what?

The answer, it turns out, is that we don't know. The centrifugal system sends all sorts of neurotransmitters and neuromodulators. The list of peptides it sends is long (GnRH, NPY, FMRF, VIP, etc.), as is the list of regions that send feedback to the retina. Which regions send feedback to the retina seems to be very species-specific, suggesting something about the environment each animal is in. But why?

This is a post long on questions and short on answers. It is more a reminder that the nice, feedforward systems that we have simple explanations for are really complex, multimodal systems designed to create appropriate behaviors in appropriate circumstances. Also it is a reminder to myself about how little I know about the brain, and how mistaken I am about even the simplest things…

I would love for someone more knowledgeable than me to pipe up and tell me something functional about what these connections do.

References

Repérant J, Médina M, Ward R, Miceli D, Kenigfest NB, Rio JP, & Vesselkin NP (2007). The evolution of the centrifugal visual system of vertebrates. A cladistic analysis and new hypotheses. Brain research reviews, 53 (1), 161-97 PMID: 17059846

Vereczki, V. The centrifugal visual system of rat. Doctoral Thesis. PDF.

Every spike matters, down to the (sub)millisecond

There was a time when the neuroscience world was consumed by the question of how individual neurons were coding information about the world. Was it in the average firing rate? Or did every precise spike matter, down to the millisecond? Was it, potentially, more complicated?

Like everything else in neuroscience, the question was resolved in an "it depends, it's complicated" kind of way. The most important argument against the role of precise spike timing is noise. There is the potential for noise in the sensory input, noise at every synapse, noise in every neuron. Why not make the system robust to this noise by taking some time average? On the other hand, if you want to respond quickly you can't take too much time to average – you need to respond!

Much of the neural coding literature comes from sensory processing, where it is easy to control the input. Once you get deeper into the brain, it becomes less clear how much of what a neuron receives is sensory and how much is some shifting mass of internal activity.

The field has shifted a bit with the rise of calcium indicators, which allow imaging the activity of large populations of neurons at the expense of timing information. Not only do they sacrifice precise timing information, but it can be hard to get connectivity information. Plus, once you start thinking about networks, the nonlinear mess makes it hard to think about timing in general.

The straightforward way to decide whether a neuron is using the specific timing of each spike to mean something is to ask whether that timing contains any information. If you jitter the precise position of any given spike by 5 milliseconds, 1 millisecond, half a millisecond – does the neural code become more redundant? Does this make the response of the neuron any more random at that moment in time than it was before?

Just show an animal some movie and record from a neuron that responds to vision. Now show that movie again and again and get a sense of how that neuron responds to each frame or each new visual scene. Then the information is just how stereotyped the response is at any given moment compared to normal, how much more certain you are at that moment than any other moment. Now pick up a spike and move it over a millisecond or so. Is this within the stereotyped range? Then it probably isn’t conveying information over a millisecond. Does the response become more random? Then you’ve lost information.
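In code, the jitter test is only a few lines. Here is a toy single-trial version (a real analysis would use many repeats and a proper information estimate):

    % Jitter each spike by dt and ask whether the response pattern changes.
    spike_times = sort(rand(1, 50));           % one trial's spike times (s)
    dt = 0.001;                                % 1 ms jitter
    jittered = spike_times + dt * randn(1, 50);
    edges = 0:0.005:1;                         % 5 ms bins
    psth_real = histcounts(spike_times, edges);
    psth_jit  = histcounts(jittered, edges);
    % Across many trials: if the jittered responses are less reproducible
    % than the real ones, timing at the scale of dt carried information.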

But these cold statistical arguments can be unpersuasive to a lot of people. It is nice if you can see a picture and just understand. So here is the experiment: songbirds have neurons which directly control the muscles for breathing (respiration). This provides us with a very simple input/output system, where the input is the time of a spike and the output is the air pressure exerted by the muscle. What happens when we provide just a few spikes and move the precise timing of one of these spikes?

The beautiful figure above is one of those that is going directly into my bag'o'examples. What it shows is a sequence of three induced spikes (upper right) where the time of the middle spike changes. The main curves show how the air pressure changes with the different timing of the spikes. You can't get much clearer than that.

Not only does it show, quite clearly, that the precise time of a single spike matters but that it matters in a continuous fashion – almost certainly on a sub-millisecond level.

Update:

The twitter thread on this post ended up being useful, so let me clarify a few things. First, the interesting thing about this paper is not that the motor neurons can precisely control the muscle; it is that when they record the natural incoming activity, it appears to provide information on the order of ~1ms; and the over-represented patterns of spikes include the patterns in the figure above. So the point is that these motor neurons are receiving information on the scale of one millisecond and that the information in these patterns has behaviorally-relevant effects.

Some other interesting bits of discussion came up. Which systems don't use spike-timing information? Plenty of sensory systems do use it; I thought at first that maybe olfaction doesn't, but of course I was wrong. Here's a hypothesis: all sensory and motor systems do (i.e., everything facing the outside world). (Although, read these papers.) When would you expect spike timing to not matter? When the number of active input neurons is large and uncorrelated. Does spike timing make sense for Deep Networks, where the neurons are implicitly representing firing rates? Here is a paper that breaks it down into rate and phase.

References

Srivastava KH, Holmes CM, Vellema M, Pack AR, Elemans CP, Nemenman I, & Sober SJ (2017). Motor control by precisely timed spike patterns. Proceedings of the National Academy of Sciences of the United States of America, 114 (5), 1171-1176 PMID: 28100491

Nemenman I, Lewen GD, Bialek W, & de Ruyter van Steveninck RR (2008). Neural coding of natural stimuli: information at sub-millisecond resolution. PLoS computational biology, 4 (3) PMID: 18369423