Monday Open Question: does neuroscience have anything to offer AI?

A review was published this week in Neuron by DeepMind luminary Demis Hassabis and colleagues about neuroscience-inspired artificial intelligence. As one would expect from a journal called Neuron, the article was pretty positive about the use of neurons!

There have been two key concepts from neuroscience that are ubiquitous in the AI field today: Deep Learning and Reinforcement Learning. Both are very direct descendants of research from the neuroscience community. In fact, saying that Deep Learning is an outgrowth of neuroscience obscures the amount of influence neuroscience has had. It did not just gift the idea of connecting artificial neurons together to build a fictive brain, but much more technical ideas: convolutional neural networks that apply a single function repeatedly across their input, as the retina or visual cortex does; hierarchical processing in the way the brain goes from layer to layer; divisive normalization as a way to keep outputs within a reasonable and useful range. Similarly, Reinforcement Learning and all its variants have continued to expand and be developed by the cognitive community.
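Divisive normalization, for instance, is simple enough to sketch in a few lines. Here is a toy NumPy version; the squaring and the constant are illustrative choices, not any canonical model from the literature:

```python
import numpy as np

def divisive_normalization(responses, sigma=1.0):
    """Divide each unit's (squared) response by the pooled activity of the
    whole population, keeping outputs in a useful range however strong the
    input gets. The exponent and constant are illustrative choices."""
    responses = np.asarray(responses, dtype=float)
    pooled = sigma**2 + np.sum(responses**2)
    return responses**2 / pooled

weak = divisive_normalization([1.0, 0.5, 0.2])
strong = divisive_normalization([10.0, 5.0, 2.0])
# Gain is controlled: even the strong input's outputs stay below 1,
# while the relative ordering of the responses is preserved.
```

The key property is that a tenfold stronger input does not produce a tenfold stronger output; the population activity in the denominator soaks it up.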

Sounds great! So what about more recent inspirations? Here, Hassabis & co offer up the roles of attention, episodic memory, working memory, and ‘continual learning’. But reading this, I became less inspired than morose (see this thread). Why? Well, look at the example of attention. Attention comes in many forms: automatic, voluntary, bottom-up, top-down, executive, spatial, feature-based, object-based, and more. It sometimes means a sharpening of the collection of things a neuron responds to, so instead of being active in response to an edge oriented this way, that way, or another way, it is only active when it sees an edge oriented one particular way. But it sometimes means a narrowing of the area in space that a neuron responds to. Sometimes responses between neurons become more diverse (decorrelated).

But this is not really how ‘attention’ works in deep networks. All of these examples seem primarily motivated by the underlying psychology, not the biological implementation. Which is fine! But does that mean that the biology has nothing to teach us? To be clear, I am not expecting Deep Networks to converge precisely on mammalian neural networks, nor that everything the brain does should be useful to AI.

This leads to some normative questions: why hasn’t neuroscience contributed more, especially to Deep Learning? And should we even expect it to?

It could just be that the flow of information from neuroscience to AI is too weak. It’s not exactly like there’s a great list of “here are all the equations that describe how we think the brain works”. If you wanted to use a more nitty-gritty implementation of attention, where would you turn? Scholarpedia? What if someone wants to move step-by-step through all the ways that visual attention contributes to visual processing? How would they do it? Answer: they would become a neuroscientist. Which doesn’t really help, time-wise. But maybe, slowly over time, these two fields will become more integrated.

More to the point, why even try? AI and neuroscience are two very different fields; one is an engineering discipline asking “how do we get this to work?” and the other a scientific discipline asking “why does this work?”. Who is to say that anything we learn from neuroscience would even be relevant to AI? Animals are bags of meat whose nervous systems are trying to solve all sorts of problems (the energy costs of wiring between neurons, physical transmission delays, the need to regulate blood osmolality, etc.) that AI has no real interest in or need to include, but which may be fundamental to how the nervous system has evolved. Is the brain the bird to AI’s airplane, accomplishing the same job but engineered in a totally different way?

Then in the middle of writing this, a tweet came through my feed that made me think I had a lot of this wrong (I also realized I had become too fixated on ‘the present’ section of their paper and less on ‘the past’ which is only a few years old anyway).

The ‘best paper’ award at the CVPR 2017 conference went to this paper which connects blocks of layers together, passing forward information from one to the next.

That looks a lot more like what cortex looks like! Though obviously sensory systems in biology are a bit more complicated:

And the advantages? “DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters”
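The core trick behind those dense connections is easy to sketch: every block receives the concatenation of the input and all earlier blocks' outputs. Here is a minimal NumPy forward pass of my own, not the paper's architecture (which uses trained convolutions, batch norm, and so on):

```python
import numpy as np

rng = np.random.default_rng(0)

def block(x, out_dim):
    """One toy 'block': a random linear map followed by a ReLU."""
    w = rng.standard_normal((x.size, out_dim)) * 0.1
    return np.maximum(0.0, x @ w)

def dense_forward(x, n_blocks=4, growth=8):
    """DenseNet-style forward pass: each new block sees the concatenation
    of the original input and every previous block's output."""
    features = [x]
    for _ in range(n_blocks):
        features.append(block(np.concatenate(features), growth))
    return np.concatenate(features)

out = dense_forward(np.ones(16))
# 16 input features plus 4 blocks x 8 new features each = 48 features
```

Because earlier features are carried forward by concatenation rather than recomputed, gradients have short paths back to the input and each block can stay small, which is where the parameter savings come from.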

So are the other features of cortex useful in some way? How? How do we have to implement them to make them useful? What are the drawbacks?

Neuroscience is big and unwieldy, spanning a huge number of different fields. But most of these fields are trying to solve exactly the same problem that Deep Learning is trying to solve in very similar ways. This is an incredibly exciting opportunity – a lot of Deep Learning is essentially applied theoretical neuroscience. Which of our hypotheses about why we have attention are true? Which are useless?

The skeletal system is part of the brain, too

It seems like a fact uniformly forgotten is that the brain is a biological organ just the same as your liver or your spleen or your bones. Its goal – like every other organ’s – is to keep your stupid collection of cells in one piece. You are one, coherent organism. Just like any other collection of individuals, your organs need to communicate in order to work together.

Many different organs are sending signals to the brain. One is your gut, which is innervated by the enteric nervous system. This “other” nervous system contains more neurons (~500 million) than the spinal cord, and about ten times as many neurons as a mouse has in its whole brain. Imagine that: living inside of you is an autonomous nervous system with sensory inputs and motor outputs.

We like to forget this. We like to point to animals like the octopus and ask, what could life be like as an animal whose nervous system is distributed across its body? Well, look in the mirror. What is it like? We have multiple autonomous nervous systems; we have computational processing spread across our body. Have you ever wondered what the ‘mind’ of your gastrointestinal system must think of the mind in the other parts of your body?

The body’s computations about what to do about the world aren’t limited to your nervous system: they are everywhere. This totality is so complete that even your very bones are participating, submitting votes about how you should be interacting with the world. Bones (apparently) secrete neurohormones that directly interact with the brain. These hormones then travel through the blood to make a small set of neurons more excitable, more ready to respond to the world. These neurons then become ready and willing to tell the rest of the brain to eat less food.

This bone-based signaling is a new finding and totally and completely surprising. I don’t recall anyone postulating a bone-brain axis before. Yet it turns out that substantial computations are performed all throughout the body that affect how we think. Animals that are hungry make decisions in a fundamentally different way, becoming more and more willing to take risks.

 

A lot of this extra-brain processing happens on much slower timescales than the fast neuronal processing in the brain: it integrates information over much longer stretches of time. This mix of fast and slow processing is ubiquitous in animals; classification is fast, while hormonal integration is slow. The body is both fast and slow.

People seem to forget that we are not one silicon instantiation of neural architecture away from replicating humans: we are meat machines.

References

 

MC4R-dependent suppression of appetite by bone-derived lipocalin 2. Nature 2017.

Please help me identify neuroscientists hired as tenure-track assistant profs in the 2016-17 faculty job season

Over at Dynamic Ecology, Jeremy Fox asked whether people could help identify recently-hired tenure-track professors in Ecology. When he did this last year, he found that 51% of North American assistant professors that were hired were women. I asked on twitter whether this would be worth doing for neuroscience and everyone seemed in favor so here goes –

If you know who was hired to fill one or more of the listed N. American assistant professor positions in neuroscience or an allied field, please email me with this information (neurorumblr@gmail.com).

I’m just going to quote him on the requirements:

I only want information that’s been made publicly available, for instance via an official announcement on a departmental website, or by someone tweeting something like “I’ve accepted a TT job at Some College, I start Aug. 1!” If you want to pass on the information that you yourself have been hired into a faculty position, that’s fine too. All you’re doing is saving me from googling publicly-available information myself to figure out who was hired for which positions. Please do not contact me to pass on confidential information, in particular confidential information about hiring that has not yet been totally finalized.

Please do not contact me with nth-hand “information” you heard through the grapevine. Not even if you’re confident it’s reliable.

I’m interested in positions at all institutions of higher education, not just research universities. Even if the position is a pure teaching position with no research duties.

 

Who cares about science?

It’s easy to say something like “you can’t put a dollar amount on the value of science” except you can, quite easily. Governments do it all the time! So how much does the US government value science? Look above and you can easily see that, adjusting for inflation, the US government cares less about science than at any time in the last twenty years. But over those twenty years, the population has grown by 20%.

Another way we could ask how much the US government values science is to look at how hard it is for a scientist to even be funded. If we look at how hard it is for a scientist to get funded to do research, you can see how devastating the cuts in funding are: the success rate has gone from 30.5% to 18.3% over twenty years. And that’s on average. How hard is it for young scientists?

The funding rate for an under-36 scientist is 3%. 3%!

I keep getting told not to worry, science is a bipartisan issue. No one wants to implement Donald Trump’s total devastation of the science budget. But if the support is so bipartisan, why do I not feel comforted? Why has investment in science decreased no matter who has been in power? Remember these numbers when the budget is passed on Friday; that is how much the government supports you.

And all that is without getting into the even more direct attacks on science being made by Trump and people like the chairman of the science committee, Lamar Smith.

The retina receives signals from all over the brain, and that is kind of weird

As a neuroscientist, when I think of the retina I am trained to think of a precise set of neurons that functions like a machine, grinding out the visual basis of the world and sending it on to the brain. It operates independently of the rest of the system, with the only feedback coming from the muscles that move the eye around and dilate the pupils. So when someone [Philipp Berens] casually mentioned that yes, the retina does in fact receive signals from the brain? Well, I was floored.

I suppose I should not have been surprised. In fruit flies, there has been a steady accumulation of evidence that the brain sends signals to the eye to get it ready to compensate for any movement the animal will make. Intuitively, that makes a lot of sense. If you are trying to make sense of the visual world, of course you would want to be able to compensate for any sudden changes that you already know about.

It turns out that there is a huge mass of feedback connections from the brain to the retina in birds and mammals, something termed the centrifugal visual system. And inputs are sent via this system from both visual areas and non-visual areas (olfactory, frontal, limbic, and so on). So imagine – your eye knows about what you are smelling. Why? In order to do what?

The answer, it turns out, is that we don’t know. The centrifugal system carries all sorts of neurotransmitters and neuromodulators. The list of peptides it sends is long (GnRH, NPY, FMRF, VIP, etc.), as is the list of regions that send feedback to the retina. Which regions send feedback seems to be very species-specific, suggesting something about the environment each animal is in. But why?

This is a post long on questions and short on answers. It is more a reminder that the nice, feedforward systems that we have simple explanations for are really complex, multimodal systems designed to create appropriate behaviors in appropriate circumstances. Also it is a reminder to myself about how little I know about the brain, and how mistaken I am about even the simplest things…

I would love someone more knowledgeable than me to pipe up and tell me something functional about what these connections do.

References

Repérant J, Médina M, Ward R, Miceli D, Kenigfest NB, Rio JP, & Vesselkin NP (2007). The evolution of the centrifugal visual system of vertebrates. A cladistic analysis and new hypotheses. Brain research reviews, 53 (1), 161-97 PMID: 17059846

Vereczki, V. The centrifugal visual system of rat. Doctoral Thesis. PDF.

Every spike matters, down to the (sub)millisecond

There was a time when the neuroscience world was consumed by the question of how individual neurons were coding information about the world. Was it in the average firing rate? Or did every precise spike matter, down to the millisecond? Was it, potentially, more complicated?

Like everything else in neuroscience, the answer was resolved in a kind of “it depends, it’s complicated” kind of way. The most important argument against the role of precise spike timing is noise. There is the potential for noise in sensory input, noise between every synapse, noise at every neuron. Why not make the system robust to this noise by taking some time average? On the other hand, if you want to respond quickly you can’t take too much time to average – you need to respond!

Much of the neural coding literature comes from sensory processing, where it is easy to control the input. Once you get deeper into the brain, it becomes less clear how much of what a neuron receives is sensory input and how much is some shifting mass of internal activity.

The field has shifted a bit with the rise of calcium indicators, which allow imaging the activity of large populations of neurons at the expense of timing information. Not only do they sacrifice precise timing information, but it can also be hard to get connectivity information. Plus, once you start thinking about networks, the nonlinear mess makes it hard to think about timing in general.

The straightforward way to decide whether a neuron is using the specific timing of each spike to mean something is to ask whether that timing contains any information. If you jitter the precise position of any given spike by 5 milliseconds, 1 millisecond, half a millisecond – does the neural code become more redundant? Does this make the response of the neuron any more random at that moment in time than it was before?

Just show an animal a movie and record from a neuron that responds to vision. Now show that movie again and again and get a sense of how that neuron responds to each frame or each new visual scene. The information is just how stereotyped the response is at any given moment, how much more certain you are at that moment than at any other moment. Now pick up a spike and move it over by a millisecond or so. Is this within the stereotyped range? Then timing at the millisecond scale probably isn’t conveying information. Does the response become more random? Then you’ve lost information.
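The logic of that jitter test can be sketched in code. This is a toy illustration with a fabricated, perfectly reliable neuron, not the analysis pipeline from any particular paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def psth(trials, n_bins=100, t_max=1.0):
    """Average spike counts per time bin across repeated trials."""
    counts = np.zeros(n_bins)
    for spikes in trials:
        binned, _ = np.histogram(spikes, bins=n_bins, range=(0.0, t_max))
        counts += binned
    return counts / len(trials)

def jitter(trials, sigma):
    """Displace every spike by Gaussian noise of width sigma (seconds)."""
    return [spikes + rng.normal(0.0, sigma, size=spikes.size)
            for spikes in trials]

# A fake neuron that fires reliably at 0.205 s and 0.705 s on every trial,
# with roughly millisecond trial-to-trial precision.
trials = [np.array([0.205, 0.705]) + rng.normal(0.0, 0.001, 2)
          for _ in range(200)]

sharp = psth(trials)                  # tightly peaked histogram
blurred = psth(jitter(trials, 0.05))  # 50 ms of jitter smears the peaks
# If jittering flattens the response profile, the precise timing was
# carrying information; if nothing changes, it wasn't.
```

The real analyses quantify this with information-theoretic measures rather than peak heights, but the comparison is the same: stereotypy before jitter versus stereotypy after.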

But these cold statistical arguments can be unpersuasive to a lot of people. It is nice if you can see a picture and just understand. So here is the experiment: songbirds have neurons which directly control the muscles for breathing (respiration). This provides us with a very simple input/output system, where the input is the time of a spike and the output is the air pressure exerted by the muscle. What happens when we provide just a few spikes and move the precise timing of one of these spikes?

The beautiful figure above is one of those that is going directly into my bag’o’examples. It shows a sequence of three induced spikes (upper right) where the time of the middle spike changes. The main curves show how the pressure changes with the different spike timings. You can’t get much clearer than that.

Not only does it show, quite clearly, that the precise time of a single spike matters but that it matters in a continuous fashion – almost certainly on a sub-millisecond level.

Update:

The twitter thread on this post ended up being useful, so let me clarify a few things. First, the interesting thing about this paper is not that the motor neurons can precisely control the muscle; it is that when they record the natural incoming activity, it appears to provide information on the order of ~1ms; and the over-represented patterns of spikes include the patterns in the figure above. So the point is that these motor neurons are receiving information on the scale of one millisecond and that the information in these patterns has behaviorally-relevant effects.

Some other interesting bits of discussion came up. What doesn’t use spike-timing information? Plenty of sensory systems do use it; I thought at first that maybe olfaction doesn’t, but of course I was wrong. Here’s a hypothesis: all sensory and motor systems do (i.e., everything facing the outside world). (Although, read these papers.) When would you expect spike timing not to matter? When the number of active input neurons is large and uncorrelated. Does spike timing make sense for Deep Networks, where the neurons are implicitly representing firing rates? Here is a paper that breaks it down into rate and phase.

References

Srivastava KH, Holmes CM, Vellema M, Pack AR, Elemans CP, Nemenman I, & Sober SJ (2017). Motor control by precisely timed spike patterns. Proceedings of the National Academy of Sciences of the United States of America, 114 (5), 1171-1176 PMID: 28100491

Nemenman I, Lewen GD, Bialek W, & de Ruyter van Steveninck RR (2008). Neural coding of natural stimuli: information at sub-millisecond resolution. PLoS computational biology, 4 (3) PMID: 18369423

All Watched Over By Machines Of Loving Grace

I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.

I like to think
(right now, please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.

I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.

 

Richard Brautigan (1967)


How do you keep track of all your projects?

One of the central tasks that we must perform as scientists – especially as we progress in our careers – is project management. To that end, I’ll admit that I find myself a bit overwhelmed with my projects lately. I have many different things I’m working on with many different people, and every week I seem to lose track of one or another. So I’m looking for a better method! It seems to me that the optimal method to keep track of projects would have the following characteristics:

  1. Ping me every week about any project that I have not touched
  2. Re-assess each project every week, both in terms of what I need to do and the priority of the project as a whole
  3. Split the projects into subtypes: data gathering, analysis, tool building, writing, etc.
  4. Keep my weekly/monthly/longer-term goals clear, and review them every week
  5. Apply some kind of social pressure to keep me on-task

Right now I use a combination of Wunderlist, Evernote, Google Calendar and Brain Focus (to keep track of how much time I spend on each task with a timer)… but when I get busy with one particular project I become monofocused and tune out the rest. Ideally, there would be some way to ping myself that I really do need to work on other things, at least a little. And it is too easy to adapt to whatever pinging mechanism the App Of The Moment uses and start ignoring it. Is it possible to get an annoying assistant/social mechanism that keeps you on task with a random strategy to prevent adaptation? IFTTT?
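For what it’s worth, the random-pinging idea is trivial to prototype. A hypothetical sketch (all project names, dates, and parameters are invented):

```python
import random
import datetime

def stale_projects(last_touched, max_idle_days=7, today=None):
    """Return the projects untouched for more than max_idle_days."""
    today = today or datetime.date.today()
    return [name for name, when in last_touched.items()
            if (today - when).days > max_idle_days]

def random_ping(stale, p=0.5):
    """Ping a random subset of stale projects, so the reminder schedule
    never becomes predictable enough to tune out."""
    return [name for name in stale if random.random() < p]

# Invented example data
last_touched = {
    "data gathering": datetime.date(2017, 3, 1),
    "tool building": datetime.date(2017, 3, 20),
    "writing": datetime.date(2017, 2, 10),
}
stale = stale_projects(last_touched, today=datetime.date(2017, 3, 22))
# stale -> ["data gathering", "writing"]
```

The randomness is the point: a reminder that fires on a fixed schedule becomes background noise, while an unpredictable one stays annoying.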

I asked about this on Twitter and everyone has a strong opinion on the right way to do this, and every opinion is different. They tend to split into:

  1. Have people bother you constantly
  2. Slack (only works with buy in from others)
  3. Trello and Workflowy
  4. Something called GTD
  5. Put sticky notes everywhere
  6. Github
  7. Spreadsheets with extensive notes

I’m super curious whether there is a better strategy for project management. Perhaps I am not using Slack correctly? Suggestions?

#Cosyne17, by the numbers (updated)

[Figure: most active Cosyne 2017 authors]

Cosyne is the Computational and Systems Neuroscience conference held every year in Salt Lake City (though – hold the presses – it is moving to Denver in 2018). Its status as the keystone computational and systems neuro conference makes it a fairly good representation of the direction of the field. Since I like data, here is this year’s Cosyne data dump.

First up is who is most active – and this year it is Jonathan Pillow, whom I dub this year’s Hierarch of Cosyne. The most active in previous years were:

  • 2004: L. Abbott/M. Meister
  • 2005: A. Zador
  • 2006: P. Dayan
  • 2007: L. Paninski
  • 2008: L. Paninski
  • 2009: J. Victor
  • 2010: A. Zador
  • 2011: L. Paninski
  • 2012: E. Simoncelli
  • 2013: J. Pillow/L. Abbott/L. Paninski
  • 2014: W. Gerstner
  • 2015: C. Brody
  • 2016: X. Wang

[Figure: total posters across all years of Cosyne, by author]

If you look at the total number of posters across all of Cosyne’s history, Liam Paninski is and probably always will be the leader. Evidently he was so prolific in the early years that they had to institute a new rule to nerf him like some overpowered video game character.

Visualizing the network diagram of co-authors also reveals a lot of structure in the conference (click for PDF):

[Figure: Cosyne 2017 co-authorship network]

And the network for the whole conference’s history is a dense mess with a soft and chewy center dominated by – you guessed it – the Paninski Posse (I am clustered into Sejnowski and Friends from my years at Salk).
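For the curious, a co-authorship network like this boils down to a few lines of pair counting. A sketch with made-up author lists, not the real Cosyne data:

```python
from collections import Counter
from itertools import combinations

# Toy abstracts: each is just its author list (invented, not Cosyne's).
abstracts = [
    ["A. Author", "B. Author", "C. Author"],
    ["B. Author", "C. Author"],
    ["C. Author", "D. Author"],
]

# Weighted edges: how often each pair of authors shares an abstract.
edges = Counter()
for authors in abstracts:
    for pair in combinations(sorted(authors), 2):
        edges[pair] += 1

# Poster counts per author; the 'Hierarch' is simply the most frequent.
posters = Counter(a for authors in abstracts for a in authors)
hierarch, n = posters.most_common(1)[0]
# hierarch -> "C. Author" (on all three abstracts)
```

Feed the weighted edge list into any graph layout tool and the clusters of frequent collaborators fall out on their own.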

[Figure: co-authorship network, Cosyne 2004–2017]

People on twitter have seemed pretty excited about this data, so I will update this later with a link to a github repository.

Speaking of twitter, it is substantially more active than it has been in the past. Neuroscience Twitter keeps growing and is a great place to learn about new ideas in the field. Here is a feed of everyone attending who is on Twitter. Let me know if you want me to add you.

There are two events you should consider attending if you are at Cosyne: the Simons Foundation is hosting a social on Friday evening and on Saturday night there is a Hyperbolic Cosyne Party which you should RSVP to right away…!

On a personal note, I am giving a poster on the first night (I-49) and am co-organizing a workshop on Automated Methods for High-Dimensional Analysis. I hope to see you all there!

Previous years: [2014, 2015, 2016]

 

Update – I analyzed a few more things based on new data…

[Figure: Cosyne 2017 abstracts by institution]

I was curious which institution had the most abstracts (measured by the presenting author’s institution). Then I realized I had last year’s data:

[Figure: Cosyne 2016 abstracts by institution]

Somehow I had not fully realized NYU was so dominant at this conference.

I also looked at which words are most enriched in accepted Cosyne abstracts:

[Figure: words enriched in accepted abstracts]

Ilana said that she sees: behavior. What is enriched in rejected abstracts? Oscillations, apparently (this is a big topic of conversation so far) 😦

[Figure: words enriched in rejected abstracts]
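The enrichment analysis itself is straightforward word counting. A toy sketch with invented abstracts; the pseudocount is my own choice to keep unseen words from dividing by zero:

```python
from collections import Counter

# Invented abstracts standing in for the accepted and rejected pools.
accepted = ["neural population dynamics during behavior",
            "behavior in population recordings"]
rejected = ["oscillations in cortical networks",
            "gamma oscillations and attention"]

def word_freq(texts):
    """Fraction of all words in a pool accounted for by each word."""
    counts = Counter(w for t in texts for w in t.split())
    total = sum(counts.values())
    return {w: n / total for w, n in counts.items()}

acc, rej = word_freq(accepted), word_freq(rejected)

def enrichment(word, eps=1e-3):
    """Relative frequency in accepted vs rejected abstracts; eps is a
    small pseudocount so absent words don't divide by zero."""
    return (acc.get(word, 0.0) + eps) / (rej.get(word, 0.0) + eps)

# enrichment("behavior") >> 1 (accepted-enriched);
# enrichment("oscillations") << 1 (rejected-enriched)
```

Rank all words by this ratio and the two word clouds above are essentially the top and bottom of the list.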

Finally, I clustered the most common words that co-occur in abstracts. The clusters?

  1. Modeling/population/activity (purple)
  2. information/sensory/task/dynamics (orange)
  3. visual/cortex/stimuli/responses (puke green)
  4. network/function (bright blue)
  5. models/using/data (pine green)

[Figure: clusters of co-occurring words in abstracts]

Donald Trump is attacking the very foundations of science (cross-posted from Medium)

The beautiful thing about science is that it works. It doesn’t care if you are a Democrat or a Republican; an atheist, Christian or Muslim; rich or poor. It works. It has consistently provided the tools necessary to improve everyone’s lives. Whether that is to cure disease, to produce the computer or phone you are reading this on, or to heat your home, science works. There have always been two key foundations that science is built on: scientific data and people. Donald Trump is attacking both of these.

Although we are lumped into categories — biologist, physicist, ecologist — there are very few real silos between fields. Science is a chaotic, swirling mess of ideas that get passed around as we attempt to explain the world. I am a neuroscientist. But, more importantly, I am a scientist. In my field, some of the most influential tools have come from studying how jellyfish glow in the dark and how bacteria survive in salty environments. We take ideas about how magnets align with each other and use them to explain how masses of brain cells are able to work together to perceive the world. I read papers from physicists, from computer scientists, from ecologists and apply this directly to problems of how brains are able to make decisions and communicate with each other.

At the heart of all this is data. Data is not Democrat or Republican, liberal or conservative. Data is. When scientists hear that the Environmental Protection Agency (EPA) is barred from communicating, that data and studies must be approved by political appointees – no, quick, walk that back, that data on the EPA website is being reviewed by political appointees – we don’t hear a focused attack on climate change. We hear an attack on the fundamental basis of science. The EPA does not simply research climate change, but funds research on health, on ecosystems, on chemistry, and more. When I scroll through the list of research I see many studies that could help neuroscience and medicine. But how would I know what to trust, what data has been allowed and what has not? An attack on the EPA’s ability to produce data is an attack on all of science.

The more insidious attack on science is on its people. Trump recently announced that visas from certain countries would not be renewable. One of these countries is Iran, one of the largest producers of scientific minds in the world. And they come to the USA! And want to live here and contribute to the scientific enterprise. Because we really do recruit the best minds, and they get here and they want to stay.

There is an important story as to how America became the scientific powerhouse that it is, especially in Physics. Prior to World War 2, the language of science was a mix of French, German and English. But as it became clear that more and more people were unacceptable in Europe, some of the greatest physicists in the world moved to America. Einstein, Bohr, Fermi and Pauli. And after the war, more and more scientists poured into America: Wernher von Braun led the team that launched America’s first satellite and America’s moon landings. And so, because America took in the best scientists in the world, America became the biggest and best producer of science in the world. And it continues that dominance because this is where the best research is done and it is where people want to be.

But these days other countries do great science, too. What happens to the Iranians who want to come to the US to do a PhD? They can’t anymore. What happens to the Iranians already in the US who wanted to stay here and build their lives here? They left Iran for a reason. But do they want to stay in America anymore? I cannot count the number of times I have already heard from my Iranian friends, “I should have taken the job in Europe.” And it is not just them. Everyone who has a visa is worried about the new fickleness of the system. Who knows who is next?

One thing is clear: Donald Trump is attacking American science. Donald Trump is attacking the very foundations that science in this country is built on. He is not attacking faceless enemies, he is attacking our very real friends and colleagues. These attacks are so bad that even the most introverted scientists are gearing up to march. This is not about Republicans or Democrats. This is about Donald Trump’s war on science.

(Cross-posted from Medium)

tl;dr bullet points:

1. There are two key foundations that science is built on: scientific data and people. Donald Trump is attacking both of these

2. Science is about data. Data is not Democrat or Republican, liberal or conservative.

3. The attack on the EPA is an attack on all science. Data collected in every field is used by a huge number of OTHER fields

4. The EPA does not simply fund climate change, but also funds research on health, on ecosystems, on chemistry, and more

5. The more insidious attack on science is on its people

6. Remember that the reason America is a scientific powerhouse is because all the best researchers in the world wanted to come here during and after WW2

7. Number of times I have already heard great Iranian scientists in the US say “I should have gone to Europe” is saddening

8. The halting of all visas to Iran etc sends a message to ALL foreign scientists who might otherwise come here

9. This is great for Europe and China, terrible for USA

10. Donald Trump is not attacking faceless enemies, he is attacking our friends and colleagues

11. As both an American and a scientist, I am so, so angry at what he is doing: attacking the very foundations of science in this country

12. You know things are bad when even the most introverted scientists want to march! When was the last time THAT happened?