Every spike matters, down to the (sub)millisecond

There was a time when the neuroscience world was consumed by the question of how individual neurons were coding information about the world. Was it in the average firing rate? Or did every precise spike matter, down to the millisecond? Was it, potentially, more complicated?

Like everything else in neuroscience, the answer was resolved in an “it depends, it’s complicated” kind of way. The most important argument against a role for precise spike timing is noise. There is potential for noise in the sensory input, noise at every synapse, noise in every neuron. Why not make the system robust to this noise by taking a time average? On the other hand, if you want to respond quickly you can’t take too much time to average – you need to respond!

Much of the neural coding literature comes from sensory processing, where it is easy to control the input. Once you get deeper into the brain, it becomes less clear how much of what a neuron receives is sensory and how much is some shifting mass of internal activity.

The field has shifted a bit with the rise of calcium indicators, which allow imaging the activity of large populations of neurons at the expense of timing information. Not only do they sacrifice precise timing information, but it can be hard to get connectivity information. Plus, once you start thinking about networks, the nonlinear mess makes it hard to think about timing in general.

The straightforward way to decide whether a neuron is using the specific timing of each spike to mean something is to ask whether that timing contains any information. If you jitter the precise position of any given spike by 5 milliseconds, 1 millisecond, half a millisecond – does the neural code become more redundant? Does this make the response of the neuron any more random at that moment in time than it was before?

Just show an animal some movie and record from a neuron that responds to vision. Now show that movie again and again and get a sense of how that neuron responds to each frame or each new visual scene. Then the information is just how stereotyped the response is at any given moment compared to normal, how much more certain you are at that moment than any other moment. Now pick up a spike and move it over a millisecond or so. Is this within the stereotyped range? Then it probably isn’t conveying information over a millisecond. Does the response become more random? Then you’ve lost information.
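This test can be sketched in a few lines of code. Everything below is made up for illustration – toy spike trains with a few stereotyped spike times, a crude word-entropy (direct-method-style) information estimate, and the same estimate after jittering every spike by a few time bins:

```python
import numpy as np

rng = np.random.default_rng(0)

def word_entropy(words):
    """Shannon entropy (bits) of a set of binary spike 'words'."""
    _, counts = np.unique(words, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information(trials, word_len):
    """Total word entropy across all times minus the mean 'noise'
    entropy across repeats at each time (direct-method style)."""
    n_trials, n_bins = trials.shape
    starts = range(0, n_bins - word_len + 1, word_len)
    all_words = np.concatenate(
        [trials[:, s:s + word_len] for s in starts], axis=0)
    total = word_entropy(all_words)
    noise = np.mean([word_entropy(trials[:, s:s + word_len]) for s in starts])
    return total - noise

def jitter(trials, max_shift):
    """Move each spike by up to +/- max_shift bins."""
    out = np.zeros_like(trials)
    for i, trial in enumerate(trials):
        for t in np.flatnonzero(trial):
            shifted = np.clip(t + rng.integers(-max_shift, max_shift + 1),
                              0, trials.shape[1] - 1)
            out[i, shifted] = 1
    return out

# Toy stimulus-locked responses: reliable spikes at fixed times plus
# a low rate of background noise spikes.
n_trials, n_bins = 200, 64
trials = np.zeros((n_trials, n_bins), dtype=int)
for t in (5, 20, 41, 58):  # stereotyped spike times
    trials[:, t] = 1
trials |= (rng.random((n_trials, n_bins)) < 0.02).astype(int)

info_orig = information(trials, word_len=4)
info_jit = information(jitter(trials, max_shift=3), word_len=4)
print(f"info original: {info_orig:.2f} bits/word")
print(f"info jittered: {info_jit:.2f} bits/word")
```

If the timing carries information, the jittered estimate should come out lower – which is exactly the question the jitter experiments ask of real data.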

But these cold statistical arguments can be unpersuasive to a lot of people. It is nice if you can see a picture and just understand. So here is the experiment: songbirds have neurons which directly control the muscles for breathing (respiration). This provides us with a very simple input/output system, where the input is the time of a spike and the output is the air pressure exerted by the muscle. What happens when we provide just a few spikes and move the precise timing of one of these spikes?

The beautiful figure above is one of those that is going directly into my bag’o’examples. What it shows is a sequence of three induced spikes (upper right) where the time of the middle spike changes. The main curves show how the pressure changes with the different spike timings. You can’t get much clearer than that.

Not only does it show, quite clearly, that the precise time of a single spike matters, but that it matters in a continuous fashion – almost certainly at a sub-millisecond level.

References

Srivastava KH, Holmes CM, Vellema M, Pack AR, Elemans CP, Nemenman I, & Sober SJ (2017). Motor control by precisely timed spike patterns. Proceedings of the National Academy of Sciences of the United States of America, 114 (5), 1171-1176 PMID: 28100491

Nemenman I, Lewen GD, Bialek W, & de Ruyter van Steveninck RR (2008). Neural coding of natural stimuli: information at sub-millisecond resolution. PLoS computational biology, 4 (3) PMID: 18369423

All Watched Over By Machines Of Loving Grace

I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.

I like to think
(right now, please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.

I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.

 

Richard Brautigan (1967)


How do you keep track of all your projects?

One of the central tasks that we must perform as scientists – especially as we progress in our careers – is project management. To that end, I’ll admit that I find myself a bit overwhelmed with my projects lately. I have many different things I’m working on with many different people, and every week I seem to lose track of one or another. So I’m looking for a better method! It seems to me that the optimal method to keep track of projects would have the following characteristics:

  1. Ping me every week about any project that I have not touched
  2. Re-assess each project every week, both in terms of what I need to do and the priority for the project as a whole
  3. Split the projects into subtypes: data gathering, analysis, tool building, writing (etc).
  4. Be clear in my weekly/monthly/longer-term goals. Review these every week
  5. Some kind of social pressure to keep you on-task

Right now I use a combination of Wunderlist, Evernote, Google Calendar and Brain Focus (which tracks how much time I spend on each task with a timer)… but when I get busy with one particular project I become monofocused and tune out the rest. Ideally, there would be some way to ping myself that I really do need to work on other things, at least a little. And it is too easy to adapt to whatever pinging mechanism the App Of The Moment is using and start ignoring it. Is it possible to get an annoying assistant/social mechanism that keeps you on task with a random strategy to prevent adaptation? IFTTT?
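The first item on that wishlist is easy enough to script. Here is a minimal sketch, with hypothetical project names and timestamps standing in for whatever a real tracker would store:

```python
import time

def stale_projects(last_touched, now=None, max_age_days=7):
    """Return project names not touched within max_age_days, oldest first."""
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400
    stale = [(ts, name) for name, ts in last_touched.items() if ts < cutoff]
    return [name for ts, name in sorted(stale)]

# Toy example: made-up projects, timestamps in seconds since the epoch.
now = 1_000_000_000
projects = {
    "song-analysis": now - 10 * 86400,   # untouched for 10 days
    "grant-draft":   now - 1 * 86400,    # touched yesterday
    "tool-building": now - 30 * 86400,   # untouched for a month
}
for name in stale_projects(projects, now=now):
    print(f"Ping! You haven't touched {name!r} this week.")
```

Feed it real last-modified dates (from a notes folder, say) and run it weekly, and you have at least the nagging half of the system.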

I asked about this on Twitter and everyone has a strong opinion on the right way to do this, and every opinion is different. They tend to split into:

  1. Have people bother you constantly
  2. Slack (only works with buy in from others)
  3. Trello and Workflowy
  4. Something called GTD
  5. Put sticky notes everywhere
  6. Github
  7. Spreadsheets with extensive notes

I’m super curious if there is a better strategy for project management. Perhaps I am not using Slack correctly? Suggestions?

#Cosyne17, by the numbers (updated)

cosyne2017_thisyear

Cosyne is the Computational and Systems Neuroscience conference held every year in Salt Lake City (though – hold the presses – it is moving to Denver in 2018). Its status as the keystone computational and systems neuro conference makes it a fairly good representation of the direction of the field. Since I like data, here is this year’s Cosyne data dump.

First up is who is most active – and this year it is Jonathan Pillow, whom I dub this year’s Hierarch of Cosyne. The most active in previous years were:

  • 2004: L. Abbott/M. Meister
  • 2005: A. Zador
  • 2006: P. Dayan
  • 2007: L. Paninski
  • 2008: L. Paninski
  • 2009: J. Victor
  • 2010: A. Zador
  • 2011: L. Paninski
  • 2012: E. Simoncelli
  • 2013: J. Pillow/L. Abbott/L. Paninski
  • 2014: W. Gerstner
  • 2015: C. Brody
  • 2016: X. Wang

cosyne2017_allposters

If you look at the total number of posters across all of Cosyne’s history, Liam Paninski is and probably always will be the leader. Evidently he was so prolific in the early years that they had to institute a new rule to nerf him like some overpowered video game character.
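For what it’s worth, the tally behind these rankings is tiny once you have the author lists. A sketch with invented names (the real input would be the scraped abstract book):

```python
from collections import Counter

# Hypothetical data: each entry is the author list of one poster.
abstracts = [
    ["J. Pillow", "A. Author"],
    ["J. Pillow", "B. Author", "L. Paninski"],
    ["L. Paninski", "C. Author"],
    ["J. Pillow"],
]

# Count how many posters each author appears on.
counts = Counter(name for authors in abstracts for name in authors)
for name, n in counts.most_common(2):
    print(f"{name}: {n} posters")
```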

Visualizing the network diagram of co-authors also reveals a lot of structure in the conference (click for PDF):

cosyne2017_network

And the network for the whole conference’s history is a dense mess with a soft and chewy center dominated by – you guessed it – the Paninski Posse (I am clustered into Sejnowski and Friends from my years at Salk).

cosyne2004-2017_network
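The co-authorship graph itself needs nothing fancier than a dictionary of sets. A toy sketch with a few stand-in names, finding who clusters with whom:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical author lists; the real data would come from the abstract book.
abstracts = [
    ["Pillow", "Paninski"],
    ["Paninski", "Cunningham"],
    ["Sejnowski", "Calhoun"],
]

# Undirected co-authorship graph: an edge for every pair sharing a poster.
graph = defaultdict(set)
for authors in abstracts:
    for a, b in combinations(set(authors), 2):
        graph[a].add(b)
        graph[b].add(a)

def component(start):
    """All authors reachable from `start` through co-authorship."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node] - seen)
    return seen

print(sorted(component("Pillow")))   # the Paninski cluster
print(sorted(component("Calhoun")))  # Sejnowski and Friends
```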

People on Twitter seem pretty excited about this data, so I will update this later with a link to a github repository.

Speaking of Twitter, it is substantially more active than it has been in the past. Neuroscience Twitter keeps growing and is a great place to learn about new ideas in the field. Here is a feed of everyone attending who is on Twitter. Let me know if you want me to add you.

There are two events you should consider attending if you are at Cosyne: the Simons Foundation is hosting a social on Friday evening and on Saturday night there is a Hyperbolic Cosyne Party which you should RSVP to right away…!

On a personal note, I am giving a poster on the first night (I-49) and am co-organizing a workshop on Automated Methods for High-Dimensional Analysis. I hope to see you all there!

Previous years: [2014, 2015, 2016]

 

Update – I analyzed a few more things based on new data…

cosyne-institutions-2017

I was curious which institution had the most abstracts (measured by the presenting author’s institution). Then I realized I had last year’s data:

cosyne-institutions-2016

Somehow I had not fully realized NYU was so dominant at this conference.

I also looked at which words are most enriched in accepted Cosyne abstracts:

acceptedwords

Ilana said that she sees: behavior. What is enriched in rejected abstracts? Oscillations, apparently (this is a big topic of conversation so far) 😦

rejectedwords

Finally, I clustered the most common words that co-occur in abstracts. The clusters?

  1. Modeling/population/activity (purple)
  2. information/sensory/task/dynamics (orange)
  3. visual/cortex/stimuli/responses (puke green)
  4. network/function (bright blue)
  5. models/using/data (pine green)

abstract-word-co-occurrence
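The enrichment analysis above starts from nothing more than word counts. Here is a toy sketch of that step – smoothed log-odds of each word in accepted versus rejected abstracts, with invented five-word “corpora” in place of the real text:

```python
from collections import Counter
import math

# Toy stand-ins for the tokenized text of accepted/rejected abstracts.
accepted = "behavior population dynamics behavior task".split()
rejected = "oscillations gamma oscillations coupling task".split()

def enrichment(a_words, b_words):
    """Add-one-smoothed log2 odds of each word in corpus A vs corpus B."""
    a, b = Counter(a_words), Counter(b_words)
    na, nb = sum(a.values()), sum(b.values())
    vocab = set(a) | set(b)
    return {w: math.log2(((a[w] + 1) / (na + len(vocab))) /
                         ((b[w] + 1) / (nb + len(vocab))))
            for w in vocab}

scores = enrichment(accepted, rejected)
top = max(scores, key=scores.get)
print(top)  # most accepted-enriched word in this toy example
```

Positive scores are accepted-enriched, negative scores rejected-enriched; the co-occurrence clustering then works on word-pair counts from the same tokenized input.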

Donald Trump is attacking the very foundations of science (cross-posted from Medium)

The beautiful thing about science is that it works. It doesn’t care if you are a Democrat or a Republican; an atheist, Christian or Muslim; rich or poor. It works. It has consistently provided the tools necessary to improve everyone’s lives. Whether that is to cure disease, to produce the computer or phone you are reading this on, or to heat your home, science works. There have always been two key foundations that science is built on: scientific data and people. Donald Trump is attacking both of these.

Although we are lumped into categories — biologist, physicist, ecologist — there are very few real silos between fields. Science is a chaotic, swirling mess of ideas that get passed around as we attempt to explain the world. I am a neuroscientist. But, more importantly, I am a scientist. In my field, some of the most influential tools have come from studying how jellyfish glow in the dark and how bacteria survive in salty environments. We take ideas about how magnets align with each other and use them to explain how masses of brain cells are able to work together to perceive the world. I read papers from physicists, from computer scientists, from ecologists and apply this directly to problems of how brains are able to make decisions and communicate with each other.

At the heart of all this is data. Data is not Democrat or Republican, liberal or conservative. Data is. When scientists hear that the Environmental Protection Agency (EPA) has been barred from communicating, that data and studies must be approved by political appointees — no, quick, walk that back, data on the EPA website is being reviewed by political appointees — we don’t hear a focused attack on climate change. We hear an attack on the fundamental basis of science. The EPA does not simply research climate change, but funds research on health, on ecosystems, on chemistry, and more. When I scroll through the list of research I see many studies that could help neuroscience and medicine. But how would I know what to trust, what data has been allowed and what has not? An attack on the EPA’s ability to produce data is an attack on all of science.

The more insidious attack on science is on its people. Trump recently announced that visas from certain countries would not be renewable. One of these countries is Iran, one of the largest producers of scientific minds in the world. And they come to the USA! And want to live here and contribute to the scientific enterprise. Because we really do recruit the best minds, and they get here and they want to stay.

There is an important story about how America became the scientific powerhouse that it is, especially in physics. Prior to World War 2, the language of science was a mix of French, German and English. But as it became clear that more and more people were unwelcome in Europe, some of the greatest physicists in the world moved to America: Einstein, Bohr, Fermi and Pauli. And after the war, more and more scientists poured into America: Wernher von Braun led the teams behind America’s first satellite and America’s moon landings. And so, because America took in the best scientists in the world, America became the biggest and best producer of science in the world. And it continues that dominance because this is where the best research is done and it is where people want to be.

But these days other countries do great science, too. What happens to the Iranians who want to come to the US to do a PhD? They can’t anymore. What happens to the Iranians already in the US who wanted to stay here and build their lives here? They left Iran for a reason. But do they want to stay in America anymore? I cannot count the number of times I have already heard from my Iranian friends, “I should have taken the job in Europe.” And it is not just them. Everyone who has a visa is worried about the new fickleness of the system. Who knows who is next?

One thing is clear: Donald Trump is attacking American science. Donald Trump is attacking the very foundations that science in this country is built on. He is not attacking faceless enemies, he is attacking our very real friends and colleagues. These attacks are so bad that even the most introverted scientists are gearing up to march. This is not about Republicans or Democrats. This is about Donald Trump’s war on science.

(Cross-posted from Medium)

tl;dr bullet points:

1. There are two key foundations that science is built on: scientific data and people. Donald Trump is attacking both of these

2. Science is about data. Data is not Democrat or Republican, liberal or conservative.

3. The attack on the EPA is an attack on all science. Data collected in every field is used by a huge number of OTHER fields

4. The EPA does not simply fund climate change research, but also funds research on health, on ecosystems, on chemistry, and more

5. The more insidious attack on science is on its people

6. Remember that the reason America is a scientific powerhouse is because all the best researchers in the world wanted to come here during and after WW2

7. Number of times I have already heard great Iranian scientists in the US say “I should have gone to Europe” is saddening

8. The halting of all visas to Iran etc sends a message to ALL foreign scientists who might otherwise come here

9. This is great for Europe and China, terrible for USA

10. Donald Trump is not attacking faceless enemies, he is attacking our friends and colleagues

11. As both an American and a scientist, I am so, so angry at what he is doing: attacking the very foundations of science in this country

12. You know things are bad when even the most introverted scientists want to march! When was the last time THAT happened?

Papers for the week, 1/1 edition

Visual projection neurons in the Drosophila lobula link feature detection to distinct behavioral programs. Ming Wu, Aljoscha Nern, W. Ryan Williamson, Mai M Morimoto, Michael B Reiser, Gwyneth M Card, Gerald M Rubin.

The hippocampus as a predictive map. Kimberly Lauren Stachenfeld, Matthew M Botvinick, Samuel J Gershman.

Searching for Signatures of Brain Maturity: What Are We Searching For? Leah H. Somerville.

The misleading narrative of the canonical faculty productivity trajectory. Samuel F. Way, Allison C. Morgan, Aaron Clauset, Daniel B. Larremore.

Contribution of Head Shadow and Pinna Cues to Chronic Monaural Sound Localization. Marc M. Van Wanrooij and A. John Van Opstal.

The influence of pinnae‐based spectral cues on sound localization. Alan D. Musicant and Robert A. Butler.

Everyday bat vocalizations contain information about emitter, addressee, context, and behavior. Yosef Prat, Mor Taub & Yossi Yovel.

A mixture of sparse coding models explaining properties of face neurons related to holistic and parts-based processing. Haruo Hosoya, Aapo Hyvärinen.

cGAL, a temperature-robust GAL4–UAS system for Caenorhabditis elegans. Han Wang, Jonathan Liu, Shahla Gharib, Cynthia M Chai, Erich M Schwarz, Navin Pokala & Paul W Sternberg.

Genome-wide analyses for personality traits identify six genomic loci and show correlations with psychiatric disorders. Min-Tzu Lo, David A Hinds, Joyce Y Tung, Carol Franz, Chun-Chieh Fan, Yunpeng Wang, Olav B Smeland, Andrew Schork, Dominic Holland, Karolina Kauppi, Nilotpal Sanyal, Valentina Escott-Price, Daniel J Smith, Michael O’Donovan, Hreinn Stefansson, Gyda Bjornsdottir, Thorgeir E Thorgeirsson, Kari Stefansson, Linda K McEvoy, Anders M Dale, Ole A Andreassen & Chi-Hua Chen.

 

Neurogastronomy, neuroenology, neuroneuroscience – does it actually tell us anything?

I should think that we are all pretty well aware of the trend to neuroify pretty much everything (neuroaesthetics, neuroeconomics, uh neuroecology). In a review of Gordon Shepherd’s book Neuroenology: How the Brain Creates the Taste of Wine, Steven Shapin spends some time ruminating on whether there is any actual use to all of this.

First, some comments on the ‘neuroenology’:

The distinctions between olfaction and gustation, and even between orthonasal and retronasal olfaction, are only a start. There are many more scientific facts to be understood about, for example, how wine moves around in the buccal cavity and then on to the pharynx and esophagus; how these muscle- and gravity-induced movements contribute to sensory experience; how swallowing is controlled by the sCPG (the swallowing central pattern generator); how swallowed wine leaves behind in the mouth and pharynx both a sticky “matrix” and “volatiles” which can be released when post-swallow respiration resumes; how the first expiration of breath after swallowing has the highest concentration of volatiles, which some tasters call “the aroma burst” and which they consider “the strongest contributor to the taste of wine”; how the nerves of the tongue and nasal epithelium are arrayed and what paths they take to the brain; how and where the various sensory modes are integrated into the experience of flavor; and how some aroma molecules come to elicit olfactory responses…

But: does it actually change how we perceive wine? Can it be used to broaden or deepen our appreciation for wine (or food in general)?

So does any of this newly acquired “objective” knowledge about sensory modes bear at all on the nature and quality of subjective experience? Yes, it may, and no anti-reductionist humanist should feel obliged to deny that. Nevertheless, some claims for the aesthetic significance of scientific knowledge seem dubious. For example, Shepherd writes about the importance of the mucus membranes in the mouth, assuring us that “being aware of the structure of the mucus membranes, their various receptors, and the sensations they produce will enrich the wine-tasting experience.” But other neuroscientific stories seem more plausibly experience-changing. Scientists’ accounts of the retronasal pathway, for example, have the capacity to alter the attention paid to different types of olfactory experience. Our senses engage with a field of potential experience: we can attend to some features of that field and not to others, making some sensible aspects part of our focal awareness, and backgrounding or bracketing others. Having a “private” conversation in a public room, we focus on our partner’s talk and not on the booming, buzzing “background” sound washing over us. Then we overhear someone mentioning our name and we realize that the background noise has been waiting to be turned into signal through a change in attentiveness…

Michael Baxandall’s marvelous accounts of what he called “the period eye” in Quattrocento painting told us how late medieval people looked at paintings — the eye attending to the areas of azure and gold in the Virgin’s clothing, guided there because of the known preciousness and expense of these pigments, just as the Quattrocento period eye attended to certain shapes because of the widely distributed mercantile skill in gauging the internal volumes of barrels from their visible surfaces. Knowing this, you can look at paintings in this way too. The French sociologist Antoine Hennion — also a wine lover — has proposed a “sociology of attention” in which features of the sensory field are framed, parsed, and differently stressed, and in which subjects momentarily make themselves passive with respect to the sensed object. (“Ah, yes, now I notice that.”) So the framing impulses that can change or enhance sense experience need not derive from sensory science, and in these cases they do not, but sensory framing may come from scientific accounts of the structures and processes of sense perception. Neuroenology relates several stories that do have the capacity to change — to reframe, to reconstitute — our sensory experience. It’s an authentic debt that some pleasures might owe to some scientific accounts…

Then there are neuroscientific accounts of what areas of the brain “light up” in functional magnetic resonance imaging (fMRI) when laboratory animals sniff different volatile substances. Neuroscientists also tell us that when you — but not, in this case, laboratory animals — are told that one of two wines you’re drinking costs more, even when the wines are in fact the same, a different area of the cortex lights up for the “expensive” wine, and does so more brightly. Yet both of these findings bear as much relationship to the experience of aroma as knowing the location of the fuel pump does to the experience of driving a car: the pump and the brain area relating to odor have got to be somewhere, but knowing where they are doesn’t add to, subtract from, or change the experiences of driving or drinking.

Finally, a note on neuromania:

[T]he Italian psychologists Paolo Legrenzi and Carlo Umiltà have called “neuromania.” This is the tendency to go beyond identifying the neural bases for beliefs and sensations to the claim that beliefs and sensations really are their neural bases. The first claim is unexceptionable: of course, sensations are the result of interactions between our neural structures and things in the world and elsewhere in our bodies. In this sense, neuroscience has begotten a set of pleonasms — using more words than necessary to convey a specific meaning — and these pleonasms have metastasized through contemporary culture. Insofar as our mental life is neurally based — and who now doubts that? – neuro-whatever might just be a potentially useful way of reminding us of this fact: “neuroaesthetics” is aesthetics; “neuroethics” is ethics; “neuromarketing” is marketing; “neuroeconomics” is economics — even if traditional practitioners of aesthetics, ethics, and the like have not routinely had much to say about which areas of the brain “light up” when we see a beautiful painting, do a good deed, or buy a new car, and provided that we appreciate that what “goes on in the brain” includes what people know, remember, feel, and feel to be worth their attention.

 

Yeah, but what has ML ever done for neuroscience?

This question has been going round the neurotwitters over the past day or so.

Let’s limit ourselves to ideas that came from machine learning that have had an influence on neural implementation in the brain. Physics doesn’t count!

  • Reinforcement learning is always my go-to, though we have to remember the initial connection from neuroscience! In Sutton and Barto 1990, they explicitly note that “The TD model was originally developed as a neuron like unit for use in adaptive networks”. There is also the obvious connection to the Rescorla-Wagner model of Pavlovian conditioning. But the work showing dopamine as prediction error is too strong to ignore.
  • ICA is another great example. Tony Bell was specifically thinking about how neurons represent the world when he developed the Infomax-based ICA algorithm (according to a story from Terry Sejnowski). This is obviously the canonical example of V1 receptive field construction.
    • Conversely, I personally would not count sparse coding. Although developed as another way of thinking about V1 receptive fields, it was not – to my knowledge – an outgrowth of an idea from ML.
  • Something about Deep Learning for hierarchical sensory representations, though I am not yet clear on what the principle is that we have learned. Progressive decorrelation through hierarchical representations has long been the canonical view of sensory and systems neuroscience. Just see the preceding paragraph! But can we say something has flowed back from ML/DL? From Yamins and DiCarlo (and others), can we say that optimizing performance at the output layer is sufficient to get decorrelation similar to the nervous system’s?
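For the TD point, the core computation is small enough to show. This is a toy TD(0) sketch of the prediction-error story, not the Sutton and Barto model verbatim: reward arrives at the last step of a trial, and the error at the reward shrinks as the value function learns to predict it – the signature matched to dopamine recordings:

```python
import numpy as np

n_states, alpha = 10, 0.1
V = np.zeros(n_states + 1)   # value of each time step; V[-1] is terminal
reward = np.zeros(n_states)
reward[-1] = 1.0             # reward delivered at the last step

def run_trial(V):
    """One pass through the trial; returns per-step TD errors delta_t."""
    deltas = np.zeros(n_states)
    for t in range(n_states):
        # TD(0) error, gamma = 1 for simplicity
        deltas[t] = reward[t] + V[t + 1] - V[t]
        V[t] += alpha * deltas[t]
    return deltas

first = run_trial(V)
for _ in range(500):
    last = run_trial(V)

print(f"error at reward, trial 1:    {first[-1]:.2f}")  # large: unexpected
print(f"error at reward, trial ~500: {last[-1]:.2f}")   # ~0: fully predicted
```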

And yet… what else? Bayes goes back to Helmholtz, in a way, and at least precedes “machine learning” as a field. Are there examples of the brain implementing… an HMM? t-SNE? SVMs? Discriminant analysis (okay, maybe this is another example)?

My money is on ideas from Deep Learning filtering back into neuroscience – dropout and LSTMs and so on – but I am not convinced they have made a major impact yet.

“Firing,” by d. m. kingsford

pickatopic

I was roaming the streets of Denver during an ultra-long layover on Friday and ran into someone offering to write poems on the spot, on any topic. The topic: brains, neurons.

Merry Christmas, neuroscience community:

Firing by d.m. kingsford

Like a V-10,000,000
this thing, ordinary enough,
comprised of the same stuff
as everyone else’s,
making up a man of average intelligence,
kind, occasionally
(but his in-laws think he’s a
bastard)
and basically fulfilled.

this thing is firing
on all cylinders, heart beat,
renal systems in check,
temperature ok, and
at this moment,
the frontal lobe bearing down on
a crossword puzzle.

The same as Stephen Hawking.
Basically.

(Apologies for the loss of formatting.)

img_20161223_130803

Studying the brain at the mesoscale

It is not entirely clear that we are going about studying the brain in the right way. Zachary Mainen, Michael Häusser and Alexandre Pouget have an alternative to our current focus on (relatively) small groups of researchers pursuing their own idiosyncratic questions:

We propose an alternative strategy: grass-roots collaborations involving researchers who may be distributed around the globe, but who are already working on the same problems. Such self-motivated groups could start small and expand gradually over time. But they would essentially be built from the ground up, with those involved encouraged to follow their own shared interests rather than responding to the strictures of funding sources or external directives…

Some sceptics point to the teething problems of existing brain initiatives as evidence that neuroscience lacks well-defined objectives, unlike high-energy physics, mathematics, astronomy or genetics. In our view, brain science, especially systems neuroscience (which tries to link the activity of sets of neurons to behaviour) does not want for bold, concrete goals. Yet large-scale initiatives have tended to set objectives that are too vague and not realistic, even on a ten-year timescale.

Here are the concrete steps they suggest in order to form a successful ‘mesoscale’ project:

  1. Focus on a single brain function.
  2. Combine experimentalists and theorists.
  3. Standardize tools and methods.
  4. Share data.
  5. Assign credit in new ways.

Obviously, I am comfortable living on the internet a little more than the average person. But with the tools that are starting to proliferate for collaborations – Slack, github, and Skype being the most frequently used right now – there is really very little reason for collaborations not to extend beyond neighboring labs.

The real difficulties are two-fold. First, you must actually meet your collaborators at some point! Generating new ideas for a collaboration rarely happens without the kind of spontaneous discussions that arise when physically meeting people. When communities are physically spread out or do not meet in a single location, this can happen less than you would want. If nothing else, this proposal seems like a call for attending more conferences!

Second is the ad-hoc way data is collected. Calls for standardized datasets have been around about as long as there has been science to collaborate on and it does not seem like the problem is being solved any time soon. And even when datasets have been standardized, the questions that they had been used for may be too specific to be of much utility to even closely-related researchers. This is why I left the realm of pure theory and became an experimentalist as well. Theorists are rarely able to convince experimentalists to take the time out of their experiments to test some wild new theory.

But these mesoscale projects really are the future. They are a way for scientists to be more than the sum of their parts, and to be part of an exciting community that is larger than one or two labs! Perhaps a solid step in this direction would be to utilize the tools that are available to initiate conversations within the community. Twitter does this a little, but where are the foraging Slack chats? Or the amygdala, PFC, or evidence-accumulation communities?