Interesting neuro/ML discussions on Twitter, 1/9/19

It seems like it might be useful to catalogue the interesting Twitter threads that pop up from time to time. They can be hard to parse and easy to miss, but there is a lot of interesting and useful material in them. I am going to focus on *scientific result*-related threads. I don’t know if this will be useful – consider it an experiment. Click on the tweets to read more of the threads.

(Click below the fold)

  • Everyone seems to be on Twitter these days, but it turns out not everyone really is. I won’t link to it, but there was a paper out last week in Science that rubbed some people the wrong way. The results were not really in my wheelhouse, but I tend to be (probably naively) optimistic that there is a decent reason that something gets published. Without the authors around to push back, the discussion spiraled in a negative direction. This is why you need to be on Twitter! This is where a lot of the discussion is happening, and it is something that a lot of people are actively paying attention to. If you are not there, you do not understand the discussion and you cannot contribute to it.
  • Why has the idea of low-dimensional neural activity been useful?

Some answers:
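(An aside from me, not one of the thread’s answers: if “low-dimensional” feels abstract, here is a minimal toy sketch of what it usually cashes out to – a population whose activity is driven by a few shared latent signals, so that a handful of principal components capture most of the variance. All numbers below are made up for illustration.)

```python
# Minimal sketch (my own illustration, not from the thread): population activity
# that is secretly driven by a few latent signals is "low-dimensional" in the
# sense that a handful of principal components explain most of its variance.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_timepoints, n_latents = 100, 1000, 3

latents = rng.standard_normal((n_timepoints, n_latents))   # shared low-d signals
mixing = rng.standard_normal((n_latents, n_neurons))       # how each neuron reads them out
activity = latents @ mixing + 0.1 * rng.standard_normal((n_timepoints, n_neurons))

# PCA via SVD of the mean-centered data
centered = activity - activity.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
var_explained = s**2 / np.sum(s**2)

print(np.cumsum(var_explained)[:5])  # the first ~3 components capture nearly all the variance
```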

  • Next is this sprawling thread that starts off by discussing how the way we talk about decision-making is quite biased and then veers off in the direction of “is talking about optimal behavior even useful?”

A lot of people are not happy with the connotations that go along with the terms ‘optimal’ and ‘rational’:

Is internal ‘value’ a useful concept?

Are value functions even meaningful (if they are constantly fluctuating in state- and context-based ways)?
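For concreteness, here is the textbook object that question is about, in a toy two-state world of my own construction (nothing from the thread): the same value function, recomputed under two motivational contexts, assigns very different numbers to the same states.

```python
# Sketch (my construction, not from the thread): a tabular value function for a
# fixed policy in a 2-state world, computed under two motivational contexts.
# V solves the Bellman equation V = r + gamma * P @ V.
import numpy as np

P = np.array([[0.9, 0.1],    # state-transition matrix under some fixed policy
              [0.2, 0.8]])
r_hungry = np.array([1.0, 0.0])   # food in state 0 is rewarding when hungry
r_sated  = np.array([0.1, 0.0])   # ...much less so when sated
gamma = 0.9

def value(r):
    # Closed-form solution of V = r + gamma * P @ V  =>  V = (I - gamma * P)^-1 r
    return np.linalg.solve(np.eye(len(r)) - gamma * P, r)

print(value(r_hungry))  # values of the two states when hungry
print(value(r_sated))   # same states, same dynamics, very different values when sated
```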

Why is Knightian uncertainty (unknown unknowns) ignored in decision-making?
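A toy sketch of the distinction (mine, not the thread’s): under risk the outcome probabilities are known and an expected value is well defined; under Knightian uncertainty the distribution itself is unknown, so there is no single expectation to optimize and you need some other rule, e.g. the worst case over a set of candidate distributions.

```python
# Sketch (my illustration): "risk" vs Knightian uncertainty for a simple gamble.
import numpy as np

outcomes = np.array([10.0, -5.0])

# Risk: the probabilities are known, so the expected value is well defined.
p_known = np.array([0.6, 0.4])
print(outcomes @ p_known)  # 4.0

# Knightian uncertainty: we only know the probability lies somewhere in a set.
# There is no single expectation to optimize; one classic response (maxmin
# expected utility) is to evaluate the worst case over the candidate set.
candidate_ps = [np.array([p, 1 - p]) for p in np.linspace(0.2, 0.8, 7)]
print(min(outcomes @ p for p in candidate_ps))  # worst-case value: -2.0
```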

Dan Yamins discusses his work training deep networks that end up looking similar to biological networks, and how that relates to the discussion. The threading on these got broken, so I’m going to post a few of them.
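For readers who have not seen that line of work: the usual model-to-brain comparison (sketched below with made-up data and scikit-learn, not his actual pipeline) fits a regularized linear map from a network layer’s features to recorded responses and asks how well held-out responses are predicted.

```python
# Rough sketch of the usual "neural predictivity" comparison (made-up data, not
# Yamins' actual pipeline): regress recorded responses on a model layer's
# features and score prediction on held-out stimuli.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_stimuli, n_features, n_neurons = 500, 256, 50

layer_features = rng.standard_normal((n_stimuli, n_features))   # model activations per stimulus
true_map = 0.1 * rng.standard_normal((n_features, n_neurons))
neural_responses = layer_features @ true_map + rng.standard_normal((n_stimuli, n_neurons))

X_tr, X_te, y_tr, y_te = train_test_split(layer_features, neural_responses, random_state=0)
model = Ridge(alpha=10.0).fit(X_tr, y_tr)
print(model.score(X_te, y_te))  # R^2 on held-out stimuli: higher = layer "looks more like" the neurons
```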

  • Also seen on Twitter is this excellent discussion on whether the LFP (local field potential) is “epiphenomenal” or not. I really like this point:
    You could argue that spikes are “encoding”, in that they are literally the messages transmitted between neurons, such that they encode and communicate the result of some computation. But spikes do not compute! The cells “compute”, dendrites “compute”, the axon hillock “computes”. In that sense, spikes are epiphenomenal: they are the secondary consequences of dendritic computation, which you can fully infer by knowing the incoming synaptic inputs and biophysical properties of the neuron. Spikes are the exhaust fumes of synaptic integration.
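To make that concrete, here is a toy sketch of my own (not from the discussion): in a leaky integrate-and-fire model, the spike times fall out deterministically once you specify the synaptic input and the cell’s parameters, which is the sense in which the spikes themselves add nothing beyond the integration that produced them.

```python
# Toy example (mine): in a leaky integrate-and-fire neuron, spike times are
# fully determined by the synaptic input and the cell's biophysical parameters.
import numpy as np

dt, tau, v_rest, v_thresh, v_reset = 0.1, 10.0, -70.0, -50.0, -70.0  # ms, ms, mV, mV, mV
rng = np.random.default_rng(1)
synaptic_input = 25.0 + 5.0 * rng.standard_normal(5000)  # input drive over 500 ms (arbitrary units)

v, spike_times = v_rest, []
for i, I in enumerate(synaptic_input):
    v += dt / tau * (-(v - v_rest) + I)   # leaky integration of the input
    if v >= v_thresh:                     # threshold crossing -> emit a spike
        spike_times.append(i * dt)
        v = v_reset

print(len(spike_times), spike_times[:5])  # entirely determined by input + parameters
```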
