Controversial topics in Neuroscience

Twitter thread here.

  • Do mice use vision much?
    • They have pretty crappy eyesight and their primary mode of exploration seems to be olfactory/whisker-based
  • How much is mouse cortex like primate cortex?
    • Mouse cortex is claimed to be more multimodal than primate cortex which is more specialized
  • “The brain does deep learning”
    • Deep learning units aren’t exactly like neurons, plus we resent the hype that they have been getting
  • Is there postnatal neurogenesis? Is it real or behaviorally relevant?
    • See recent paper saying it doesn’t exist in humans
  • Brain imaging
    • Does what we see correlate with neural activity? Are we able to correct for multiple comparisons correctly? Does anyone actually correct for multiple comparisons correctly?
  • Bayesian brain
    • Do people actually use their priors? Does the brain represent distributions? etc
  • Konrad Kording
    • Can neuroscientists understand a microprocessor? Is reverse correlation irrelevant?
  • Do mice have a PFC
    • It’s so small!
  • STDP: does it actually exist?
    • Not clear that neurons in the brain actually use STDP – often looks like they don’t. Same with LTP/LTD!
  • How useful are connectomes
    • About as useful as a tangled ball of yarn?
  • LTP is the storage mechanism for memories
    • Maybe it’s all stored in the extracellular space, or the neural dynamics, or something else.
  • Are purely descriptive studies okay or should we always search for mechanism
    • Who cares about things that you can see?!

Updates

  • Does dopamine have ‘a role’?
    • Should we try to claim some unified goal for dopamine, or is it just a molecule with many different downstream effects depending on the particular situation?
  • Do oscillations (‘alpha waves’, ‘gamma waves’, etc) do anything?
    • Are they just epiphenomena that are correlated with stuff? Or do they actually have a causative role?

Communication by virus

‘Some half-baked conceptual thoughts about neuroscience’ alert

In the book Snow Crash, Neal Stephenson explores a future world that is being infected by a kind of language virus. Words and ideas have power beyond their basic physical form: they have the ability to cause people to do things. They can infect you, like a song that you just can’t get out of your head. They can make you transmit them to other people. And the book supposes a language so primal and powerful that it can take you over completely.

Obviously that is just fiction. But communication in the biological world is complicated! It is not only about transmitting information but also about convincing the receiver of something. Humans communicate by language and by gesture. Animals sing and hiss and hoot. Bacteria communicate by sending signaling molecules to each other. Often these signals are not just to let someone know something but also to persuade them to do something else. Buy my book, a person says; stay away from me, I’m dangerous, the rattlesnake says; come over here and help me scoop up some nutrients, a bacterium signals.

And each of these organisms is made up of smaller things also communicating with each other. Animals have brains made up of neurons and glia and other meat, and these cells talk to each other. Neurons send chemicals across synapses to signal that they have gotten some information, processed it, and, just so you know, here is what they computed. The signals they send aren’t always simple. They can be excitatory to another neuron or inhibitory, a kind of integrating set of pluses and minuses for the other neuron to work on. But they can also be peptides and hormones that, in the right set of other neurons, will set new machinery to work, machinery that fundamentally changes how the neuron computes. In all of these scenarios, the neuron that receives the signal has some sort of receiving protein – a receptor – that is specially designed to detect those signaling molecules.

This being biology, it turns out that the story is even more complicated than we thought. Neurons are cells, and just like every other cell they have internal machinery that uses mRNAs to provide instructions for building the protein machinery needed to operate. If a neuron needs more of one thing, it will synthesize more of the corresponding mRNA and translate it into new proteins. Roughly, the more mRNA you have, the more of that protein – tiny little machines that live inside the cell – you will produce.

This transcription and translation is behind much of how neurons learn. The saying goes that neurons that fire together wire together, so that when they respond to things at the same time (such as being in one location at the same time you feel sad) they will tend to strengthen the link between them to create memories. And the physical manifestation of this is producing more of a specific receptor protein (say), so that the same signal will now activate more receptors and result in a stronger link.
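
As an aside, the fire-together-wire-together idea itself is easy to cartoon in a few lines of code. Here is a toy rate-based Hebbian sketch – my own caricature, not the molecular story above:

import numpy as np

# Toy Hebbian sketch: two neurons that are often co-active strengthen their
# connection; the "shared" events stand in for being in one location at the
# same time you feel sad.
rng = np.random.default_rng(0)
eta = 0.01   # learning rate
w = 0.0      # synaptic weight from neuron A to neuron B

for t in range(1000):
    shared = rng.random() < 0.2                          # a shared event occurs
    a = 1.0 if shared else float(rng.random() < 0.05)    # presynaptic activity
    b = 1.0 if shared else float(rng.random() < 0.05)    # postsynaptic activity
    w += eta * a * b                                     # strengthen when both fire

print(f"weight after 1000 steps of correlated activity: {w:.2f}")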

And that was pretty much the story so far. But it turns out that there is a new wrinkle: neurons can directly ship mRNAs into each other in a virus-like fashion, avoiding the need for receptors altogether. There is a gene called Arc which is involved in many different pieces of the plasticity puzzle. Looking at the sequence of the gene, it turns out that a portion of it encodes a virus-like structure that can encapsulate RNAs and burrow through other cells’ membranes. This RNA is then released into the other cell. And this mechanism works: Arc-mediated signaling actually causes strengthening of synapses.

Who would have believed this? That the building blocks for little machines are being sent directly into another cell? If classic synaptic transmission is kind of like two cells talking, this is like just stuffing someone else’s face with food or drugs. This isn’t in the standard repertoire of how we think about communication; this is more like an intentional mind-virus.

There is this story in science about how the egg was traditionally perceived to be a passive receiver during fertilization. In reality, eggs are able to actively choose which sperm they accept – they have a choice!

The standard way to think about neurons is somewhat passive. Yes, they can excite or inhibit the neurons they communicate with but, at the end of the day, they are passively relaying whatever information they contain. This is true not only in biological neurons but also in artificial neural networks. The neuron at the other end of the system is free to do whatever it wants with that information. Perhaps a reconceptualization is in order. Are neurons more active at persuasion than we had thought before? Not just a selfish gene but selfish information from selfish neurons? Each neuron, less interested in maintaining its own information than in maintaining – directly or homeostatically – properties of the whole network? Neurons do not simply passively transmit information: they attempt to actively guide it.

2017 in review (a quantified life)

I have always found it useful to take advantage of the New Year and reflect on what I have done over the past year. The day itself is a useful bookmark in life, inevitably trapped between leaving town for Christmas and coming back to town after the New Year begins. Because of the enforced downtime, what I happen to read has a strong influence on me – last year, I hopped on the Marie Kondo craze and really did manage to do a better job of keeping clean (kind of) but, more importantly, of organizing my clothes by rolling and folding them until they fit so perfectly in my drawers. So that was useful, I guess.

The last year has been okay. Not great, not terrible. Kind of middle-of-the-road as my life goes. There have been some big wins (organizing a fantastic workshop at Cosyne on neurobehavioral analysis and being awarded a Simons Foundation fellowship that lets me join a fantastic group of scientists) and some frustrations (mostly scientific work that goes slowly slowly slowly).

One thing that sticks out for me over this past year – over these past two years, actually – is how little time I have spent on this blog. Or rather, how little of what I have done has been published on this blog. It’s not for a lack of time! I have actually done a fair bit of writing but am constantly stuck after a paragraph or two, my motivation waning until it disappears completely. This is largely due to how I responded to some structural features in my life, mostly a long commute and a lot of things that I want to accomplish.

Last year I had the “clever” idea of creating a strict regimen of hourly and daily goals, both for work and for my life. Do this analysis from 3pm – 4pm. Debug that code from 4pm – 5pm. Play the piano from 8pm – 9pm. Things like that. Maybe this works for other people? But I end up overambitious, constantly adding things that I need to do today, so much so that I rapidly switch from project to project, each slot mangled into nonsense by the little new things that will always spring up on any given day. Micromanaging yourself is the worst kind of managing, especially when you don’t realize you are doing it.

This is where what I read over winter break made me think. One of the three articles that influenced me was about the nature of work:

For unlike someone devoted to the life of contemplation, a total worker takes herself to be primordially an agent standing before the world, which is construed as an endless set of tasks extending into the indeterminate future. Following this taskification of the world, she sees time as a scarce resource to be used prudently, is always concerned with what is to be done, and is often anxious both about whether this is the right thing to do now and about there always being more to do. Crucially, the attitude of the total worker is not grasped best in cases of overwork, but rather in the everyday way in which he is single-mindedly focused on tasks to be completed, with productivity, effectiveness and efficiency to be enhanced. How? Through the modes of effective planning, skilful prioritising and timely delegation. The total worker, in brief, is a figure of ceaseless, tensed, busied activity: a figure, whose main affliction is a deep existential restlessness fixated on producing the useful.

Yup, that pretty much sums up how I was trying to organize my life. In the hope of accomplishing more I ended up doing less. This year I am trying a less-is-more approach: have fewer, more achievable goals each day/month/time unit; have more unstructured time; read more widely; and so on. Instead of saying I need to learn piano and I need to make art and I need to play with arduinos and I need to memorize more poetry, and finding more and more things that I need to do, I will just list some things I’m interested in doing. I will look at that list every so often to remind myself, and then allow myself to flow into the things I am most interested in rather than forcing it.

I was lucky enough in graduate school to join a lunch with Eve Marder. There are two types of scientists, she said: starters and finishers. Some people start a lot of projects; some people finish a few. This has always stuck with me. This past year I have been trying to maximize how many things I can work on – and it turns out that is a lot of different things. This year I want to work on a couple of things at a time and finish them. Do them well.

I have this memory of Wittgenstein declaring in the Tractatus that “the purpose of the Philosopher is to clarify.” I must have confabulated that quote because I could never find it again. Still, it’s my favorite thing that Wittgenstein ever said. For a scientist, the aphorism should be that “the purpose of the Scientist is to simplify.”

There was an article in the New York Times recently from an 88-year-old man looking back on the 18 years he has lived in the millennium:

I’m trying to break other habits in far more conventional ways. As in many long marriages, my wife and I enjoy spending time with the same friends, watch the same television programs, favor the same restaurants, schedule vacations to many of the same places, avoid activities that venture too far from the familiar.

We decided to become more adventurous, shedding some of those habits. European friends of ours always seem to find the time for an afternoon coffee or glass of wine, something we never did. Now, spontaneously, one of us will suggest going to a coffee shop or cafe just to talk, and we do. It’s hardly a lifestyle revolution, but it does encourage us to examine everything we do automatically, and brings some freshness to a marriage that started when Dwight Eisenhower was elected president.

The best memories can come from unexpected experiences. The best thoughts can come from exposure to unexpected ideas. Attempting to radically organize my life has left me without those little moments where my mind wanders from topic to topic. Efficiency. I have cut back on my reading for pleasure, most of which now comes on audiobook during my commute and somehow seems to prevent deep thinking. But the reason I am interested in science in the first place is because of the questions about who we are and how we behave that come out of thinking about the things I read! The solution, again, is to remove some of the structure I am imposing on my life, simplify and force myself to let go of the need to always be doing something quantifiable and useful.

Looping back, this is why sitting down to write – and finishing what I write – is one of my big goals for the year. Because I find writing fun! And I find it the best way to really think rigorously, to explore new thoughts and new ideas. There is much less need to do so much, to try so many projects, when I can read and think about something and write about it to make something useful and enjoyable, instead of making a huge product out of it.

I am not a Stoic but find Stoic thinking useful. Something I read over the holidays:

Let me then introduce you to three fundamental ideas of Stoicism – one theoretical, the other two practical – to explain why I’ve become what I call a secular Stoic. To begin with, the Stoics – a school of philosophers who flourished in the Greek and Roman worlds for several hundred years from the third century BCE – thought that, in order to figure out how to live our lives (what they called ethics), we need to study two other topics: physics and logic. “Physics” meant an understanding of the world, as best as human beings can grasp it, which is done by way of all the natural sciences as well as by metaphysics.

The reason that physics is considered so important is that attempting to live while adopting grossly incorrect notions about how the world works is a recipe for disaster. “Logic” meant not only formal reasoning, but also what we would today call cognitive science: if we don’t know how to use our mind correctly, including an awareness of its pitfalls, then we are not going to be in a position to live a good life.

Beyond reading and self-reflection, the best way to understand your life is to quantify it. Quantification is the best way to peer into the past and really cut through hazy memories that are full of holes. What did I really do? What did I really think? This isn’t an attempt at stricture or rigidity: it’s an attempt at radical self-knowledge. I’m fairly active at journaling, which is the first step, but I also keep track of what I eat and how I exercise using MyFitnessPal, books I read on Goodreads, movies I watch on letterboxd, where I have been, using my phone to track me, and science articles I read using Evernote (I used to be very active on yelp but somehow lost track of that). Using these tools to look back on the past year is a great experience: “Oh yeah, I loved that movie!” or “Ugh, I can’t believe I read that whole book.” or just reminding myself of pleasant memories from a short trip to LA.

I’d like to expand that this year to include some other relevant data – ‘skills’ I work on like playing piano to see whether I’m actually improving, TV I watch (because maybe I watch too much, or not enough!), what music I’m listening to, where I spend my money (I already avidly keep track of the fluctuations in how much I have month-to-month), and what important experiences I have (vacations; hikes; seeing exciting new art). There don’t seem to be any good apps for these things outside of Mint, so I have assembled a giant Google Sheet for all of these categories to make it easier to access and analyze the data, with a main Sheet that I can use every month to look back and make some qualitative observations. Oh, and I’m also building a bunch of arduinos that can sense temperature, humidity, light, and sound intensity to put in different rooms of my house to log those things (mostly because my house is always either too hot or too cold and the thermostat is meaningless and I want to figure out why, and partly because I want to make sweet visualizations of the activity in my house throughout the year).

So my lists!

These are the movies I watched in 2017 and to which I gave 5 stars (no particular order):

Embrace of the Serpent
American Honey
Victoria
Gentleman’s Agreement
T2: Trainspotting
Logan
Moana
While We’re Young
Moonlight

With honorable mentions to My Life As A Zucchini, Blade Runner 2049, and Singles.

These are the books I read in 2017 and liked the most:

The Invisibility Cloak (Ge Fei)
The Wind-up Bird Chronicle (Murakami)
Ficciones (Borges)
The Stars Are Legion (Hurley)
Redshirts (Scalzi)
Red Mars (Robinson)
We Are Legion (Taylor)
Permutation City (Egan)
Neuromancer (Gibson)

That is a lot more scifi than I normally read, and it includes several books that I had read previously.

Where was I (generated using this)?

There was an article a few years ago on the predictability of human movement. It turns out that people are pretty predictable! If you know where they are at one moment, you can guess where they will be the next. That’s not too surprising, though, is it? You’re mostly at work or at home. If you go to a bar, there is a higher than random probability that you’ll go home afterward.

The data that you can ask your Android phone to collect on you is unfortunately a bit impoverished. It doesn’t log everything you do but is biased toward times when you check your phone (lunch, when you’re the passenger in a long car ride home, etc). Still, it captures the broad features of the day.

I’ve been keeping track of the data for two years now, so I downloaded it and did a quick analysis of the entropy of my own life. How predictable is my location? If you bin the data into 1 sq. mile bins, entropy is a measure of how much uncertainty there is in where I was. 1 bit of entropy would mean you could guess where I was down to the mile with only one yes-or-no question; 2 bits of entropy would mean you could guess with two questions; and so on.
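
Roughly, the calculation looks like this – a quick sketch, where the ‘lat’/‘lon’ column names and the degree-sized bins are just stand-ins for however your particular location export is formatted:

import numpy as np
import pandas as pd

def location_entropy(df, bin_deg=0.0145):
    """Shannon entropy (in bits) of locations dropped into ~1 sq. mile bins.

    0.0145 degrees of latitude is roughly a mile; longitude bins are a bit
    wider at mid-latitudes, but close enough for this sketch. Assumes a
    DataFrame with one row per logged fix and 'lat'/'lon' columns.
    """
    lat_bin = np.floor(df["lat"] / bin_deg)
    lon_bin = np.floor(df["lon"] / bin_deg)
    counts = df.groupby([lat_bin, lon_bin]).size().to_numpy()
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# 3 bits of entropy is about as uncertain as 8 equally likely square-mile bins.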

On any given day of the week, there are roughly 3 bits of entropy in my location (much less on weekends). But as you can see, it varies a lot by month depending on whether I am traveling or not.

In 2016 (the weird first month is because that’s when I started collecting data and only got a few days):

In 2017:

I will leave you with an image from the last thing I was reading in 2017, and which was consistently the weirdest thing I read: Battle Angel Alita.

Studying the brain at the mesoscale

It is not entirely clear that we are going about studying the brain in the right way. Zachary Mainen, Michael Häusser and Alexandre Pouget have an alternative to our current focus on (relatively) small groups of researchers focusing on their own idiosyncratic questions:

We propose an alternative strategy: grass-roots collaborations involving researchers who may be distributed around the globe, but who are already working on the same problems. Such self-motivated groups could start small and expand gradually over time. But they would essentially be built from the ground up, with those involved encouraged to follow their own shared interests rather than responding to the strictures of funding sources or external directives…

Some sceptics point to the teething problems of existing brain initiatives as evidence that neuroscience lacks well-defined objectives, unlike high-energy physics, mathematics, astronomy or genetics. In our view, brain science, especially systems neuroscience (which tries to link the activity of sets of neurons to behaviour) does not want for bold, concrete goals. Yet large-scale initiatives have tended to set objectives that are too vague and not realistic, even on a ten-year timescale.

Here are the concrete steps they suggest in order to form a successful ‘mesoscale’ project:

  1. Focus on a single brain function.
  2. Combine experimentalists and theorists.
  3. Standardize tools and methods.
  4. Share data.
  5. Assign credit in new ways.

Obviously, I am comfortable living on the internet a little more than the average person. But with the tools that are starting to proliferate for collaborations – Slack, github, and Skype being the most frequently used right now – there is really very little reason for collaborations to be confined to neighboring labs.

The real difficulties are two-fold. First, you must actually meet your collaborators at some point! Generating new ideas for a collaboration rarely happens without the kind of spontaneous discussions that arise when physically meeting people. When communities are physically spread out or do not meet in a single location, this can happen less than you would want. If nothing else, this proposal seems like a call for attending more conferences!

Second is the ad-hoc way data is collected. Calls for standardized datasets have been around about as long as there has been science to collaborate on, and it does not seem like the problem is being solved any time soon. And even when datasets have been standardized, the questions they were collected to answer may be too specific to be of much utility even to closely related researchers. This is why I left the realm of pure theory and became an experimentalist as well. Theorists are rarely able to convince experimentalists to take the time out of their experiments to test some wild new theory.

But these mesoscale projects really are the future. They are a way for scientists to be more than the sum of their parts, and to be part of an exciting community that is larger than one or two labs! Perhaps a solid step in this direction would be to utilize the tools that are available to initiate conversations within the community. Twitter does this a little, but where are the foraging Slack chats? Or amygdala, PFC, or evidence-accumulation communities?

Alert: empirical parasites are taking advantage of data scientists

The aerial view of the concept of collecting data is beautiful. What could be better than high-quality information carefully examined to give a p-value less than .05? The potential for leveraging these results for narrow papers in high-profile journals, never to be checked except by other independent studies costing thousands – tens of thousands – is a moral imperative to honor those who put the time and effort into collecting that data.

However, many of us who have actually performed data analyses, managed large data sets and analyses, and curated data sets have concerns about the details. The first concern is that someone who is not regularly involved in the analysis of data may not understand the choices involved in statistical testing. Special problems arise if data are to be combined from independent experiments and considered comparable. How heterogeneous were the study populations? Does the underlying data fulfill the assumptions for each test? Can it be assumed that the differences found are due to chance or improper correction for complex features of the data set?

A second concern held by some is that a new class of research person will emerge – people who have very little mathematical and computational training but analyze data for their own ends, possibly stealing from the research productivity of those who have invested much of their career in these very areas, or even using the data to try to prove what the original investigators had posited before data collection! There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as “empirical parasites”.

Wait wait, sorry, that was an incredibly stupid argument. I don’t know how I could have even come up with something like that… It’s probably something more like this:

A second concern held by some is that a new class of research person will emerge — people who had nothing to do with the design and execution of the study but use another group’s data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited. There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as “research parasites.”

Yes, that’s it, open science could lead the way to research parasites analyzing other people’s data. I now look forward to the many other subtle insights on science that the editors of NEJM have to say.

This man has neuroscientists FURIOUS!

I just returned from a vacation in London and Scotland, which is a fantastic way to clear your mind. Then I returned and asked people: what did I miss? Apparently this, which instantly clouded my mind right back up.

Here’s a suggestion: never trust an article called “Here’s Why Most Neuroscientists Are Wrong About the Brain”. Especially when the first sentence is flat-out wrong:

Most neuroscientists believe that the brain learns by rewiring itself—by changing the strength of connections between brain cells, or neurons

Okay maybe not wrong – that is one, though not the only, way that neuroscientists think the brain learns – but certainly misleading in the extreme. What are some of the non-synaptic mechanisms that contribute to plasticity?

  1. Homeostatic plasticity/intrinsic excitability
  2. Neuromodulation (e.g. dopamine-related transcription changes)
  3. Developmental plasticity
  4. Attractor states (we can quibble about whether this counts)
  5. Neurogenesis
  6. Dendritic excitability

That’s what I can come up with off the top of my head; I am sure that there are more. I’ll just focus on one for a second – because I think it is pretty cool – the intrinsic excitability of a neuron. Neurons maintain a balance of hyperpolarizing and depolarizing ion channels in order to control how often they fire, as well as how responsive they are to input in general. A simple experiment is to block the ability of a neuron to fire for, say, a day. When you remove the blockade, the cell will now be firing much more rapidly. It is pretty easy to imagine that all sorts of things can happen to a network when a cell fundamentally changes how it is firing action potentials. [For more, I always think of Gina Turrigiano in connection to this literature.]
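
To make that concrete, here is a toy simulation of the blockade experiment – my own caricature with made-up numbers, not any particular published model:

# A neuron slowly adjusts an intrinsic "gain" (a stand-in for its balance of
# ion channels) to keep its firing rate near a set point. Block firing for a
# while and the gain creeps up; release the block and the cell fires far
# above baseline until the gain relaxes back down.
target_rate = 5.0    # Hz, homeostatic set point
gain = 1.0           # intrinsic excitability
tau = 200.0          # slow homeostatic time constant (in steps)
drive = 5.0          # constant synaptic drive

def rate(gain, drive, blocked):
    return 0.0 if blocked else gain * drive

for t in range(3000):
    blocked = 1000 <= t < 2000             # "TTX in the bath" for the middle third
    r = rate(gain, drive, blocked)
    gain += (target_rate - r) / tau        # integral-style homeostatic adjustment
    if t in (999, 1999, 2001, 2500):
        print(f"t={t:4d}  blocked={blocked}  gain={gain:6.2f}  rate={r:6.1f} Hz")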

 

I also wonder about this quote in the article:

Neuroscientists have not come to terms with this truth. I have repeatedly asked roomfuls of my colleagues, first, whether they believe that the brain stores information by changing synaptic connections—they all say, yes—and then how the brain might store a number in an altered pattern of synaptic connections. They are stumped, or refuse to answer.

Perhaps he was unclear when asking the question, because this is a solved problem. Here is a whole chapter on encoding numbers with ‘synaptic’ weights. Is this how the brain does it? Probably not. But it is trivially easy to train a classic neural network to store numbers.
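
For what it’s worth, here is a minimal sketch of the point – not the construction from the linked chapter, just a bare-bones delta-rule network that stores a few numbers in its weights and reads them back out from one-hot cues:

import numpy as np

# Store a handful of numbers in the "synaptic" weights of a tiny linear
# network, keyed by one-hot cues, then recall them.
rng = np.random.default_rng(1)
numbers = np.array([3.0, 1.41, 42.0, 2.718])   # the memories to store
cues = np.eye(len(numbers))                    # one one-hot cue per number

w = rng.normal(scale=0.1, size=len(numbers))   # synaptic weights
lr = 0.5

for epoch in range(200):
    for cue, target in zip(cues, numbers):
        out = cue @ w                          # recall = weighted sum of the cue
        w += lr * (target - out) * cue         # delta-rule weight update

for cue, target in zip(cues, numbers):
    print(f"stored {target:7.3f}   recalled {cue @ w:7.3f}")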

I do not mean to harp on the author of this piece, but these are interesting questions that he is raising! I love molecules. Neuroscience often enjoys forgetting about them for simplicity. But it is important that neuroscientists are clear on what we already know – and what we actually do not.

[note: sorry, I meant to post this late last week but my auto-posting got screwed up. I’m going to blame this on jet lag…]

The living building

Buildings – even the most cement-filled – are organic; they change through interaction with the parasites that infest them (us, mostly). How often do architects consider this? Ask any scientist who moves into a new laboratory building and you’ll be met with eyerolls and exasperated stories. The new neuroscience institute that I work in is fantastic in many ways, but it has some extremely puzzling features, such as the need to repeatedly use an ID card to unlock almost every door in the lab. This is in contrast to my previous home, the Salk Institute, which was a long open space separated only by clear glass, allowing free movement and easy collaboration.

I mostly mention this because the video above – on How Buildings Learn – has a fantastic story at the beginning about MIT’s famous media lab:

I was at the Media Lab when it was brand new. In the three months I was there, the elevator caught fire, the revolving door kept breaking, every doorknob in the building had to be replaced, the automatic door-closer was stronger than people and had to be adjusted, and an untraceable stench of something horrible dead filled the lecture hall for months. This was normal.

In many research buildings, a central atrium serves to bring people together with open stairways, casual meeting areas, and a shared entrance where people meet daily. The Media Lab’s entrance cuts people off from each other; there are three widely separated entrances each huge and glassy; three scattered elevators; few stairs; and from nowhere can you see other humans in the five story space. Where people might be visible, they are carefully obscured by internal windows of smoked glass.

The Obama raise (for scientists)

My dad, being my dad, sent me an article claiming that Obama was about to change overtime rules so that more salaried workers will be covered. Would I get paid more? Psh, yeah right, I said. But then I looked a bit more closely and it wasn’t so clear. The new proposed rules state that anyone making under $47,892 (initially reported as $50,400) would be eligible for overtime (whereas previously the limit was a measly $24,000). That is: most postdocs will, technically, be eligible for overtime money if they work more than 40 hours per week.

So I decided to ask the Twitter hivemind and set off a predictable storm of WTF’s. The summary is: yes, it looks right now like postdocs will be eligible for overtime pay but there is a commentary period to propose exceptions to these rules (I don’t think graduate students will because they are “students”). No, no one thinks this will actually end up happening; somehow the NIH/NSF will make postdocs exempt from these rules (see a bit more here). Here are the full proposed rules. If you have opinions about these rules, please send in comments:

The Notice of Proposed Rulemaking (NPRM) was published on July 6, 2015 in the Federal Register (80 FR 38515) and invited interested parties to submit written comments on the proposed rule at www.regulations.gov on or before September 4, 2015. Only comments received during the comment period identified in the Federal Register published version of the NPRM will be considered part of the rulemaking record.

I was asked to do a storify of all the twitter responses but, uh, there were a lot of them and I wasn’t 100% paying attention. So here are some salient points:

  1. What are the job duties of a postdoc? Does going to a lecture count, or will that not count toward “work time” (if it does, do I get credit for reading a salient paper at home? At lab?)
  2. Is a fellow an employee, or are they different somehow? Is this technically a “salary”? This seems to be a point of confusion and came up repeatedly.
  3. Calling PDs “trainees” while also claiming them as exempt “learned professionals” is a joke.
  4. This may increase incentive to train PhDs and decrease incentive to hire postdocs (“For my lab, min PD cost would be $62k/yr, PhD cost $31k/yr all-in.”). Similarly, the influence may be most felt on small labs with less money, less on large “prestige” labs.

#1 is the most interesting question in general.

If enforced at all (hmm), this would function like a decrease in NIH/NSF/etc funding. But let’s face it, I think we can all agree that the most likely outcome here is an ‘exemption’ for postdocs and other scientists…

Edit: I meant to include this: currently in the overtime rules, there is a “learned professional” exemption that describes scientists – and is why they do not get overtime pay. In order to qualify for that exemption, there is some salary floor that they must make ($455 per week, or ~$23,660 per year). The new proposed rules will state:

In order to maintain the effectiveness of the salary level test, the Department proposes to set the standard salary level equal to the 40th percentile of earnings for full-time salaried workers ($921 per week, or $47,892 annually for a full-year worker, in 2013)

The NIH postdoc stipend levels currently start at $42,480 for a first-year postdoc, increase yearly, and pass this threshold in year 4. The fastest way to avoid overtime rules would be to simply bump up the first-year salary to $47,893.
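
The arithmetic, for anyone who wants to check it (just the numbers quoted above; stipends for later years aren’t listed here, so only year 1 is checked):

# Annualize the weekly thresholds and see where a first-year postdoc falls.
old_floor = 455 * 52      # current "learned professional" salary floor: $23,660
new_floor = 921 * 52      # proposed 40th-percentile threshold: $47,892
postdoc_year1 = 42_480    # first-year NIH stipend quoted above

print(f"old floor: ${old_floor:,}")
print(f"new floor: ${new_floor:,}")
eligible = postdoc_year1 < new_floor
print(f"year-1 postdoc (${postdoc_year1:,}) eligible for overtime: {eligible}")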

Is the idea that neurons perform ‘computations’ in any way meaningful?

I wrote this up two months ago and then forgot to post it. Since then, two different arguments about ‘computation’ have flared up on Twitter. For instance:

I figured that meant I should finally post it to help clarify some things. I will have more comments on the general question tomorrow.

Note that I am pasting twitter conversations into wordpress and hoping that it will convert them appropriately. If you read this via an RSS reader, it might be better to see the original page.

The word ‘computation’, when used to refer to neurons, has started to bother me. It often seems to be thrown out as a meaningless buzzword, as if using the word computation makes scientific research seem more technical and more interesting. Computation is interesting and important but most of the time it is used to mean ‘neurons do stuff’.

In The Future of the Brain (review here), Gary Marcus gives a nice encapsulation of what I am talking about:

“In my view progress has been stymied in part by an oft-repeated canard — that the brain is not “a computer” — and in part by too slavish a devotion to the one really good idea that neuroscience has had thus far, which is the idea that cascades of simple low level “feature detectors” tuned to perceptual properties, like difference in luminance and orientation, percolate upward in a hierarchy, toward progressively more abstract elements such as lines, letters, and faces.”

Which pretty much sums up how I feel: either brains aren’t computers, or they are computing stuff but let’s not really think about what we mean by computation too deeply, shall we?

So I asked about all this on twitter then I went to my Thanksgiving meal, forgot about it, and ended up getting a flood of discussion that I haven’t been able to digest until now:

(I will apologize to the participants for butchering this and reordering some things slightly for clarity. I hope I did not misrepresent anyone’s opinion.)

The question

Let’s first remember that the very term ‘computation’ is almost uselessly broad.

Neurons do compute stuff, but we don’t actually think about them like we do computers

Just because it ‘computes’, does that tell us anything worthwhile?

The idea helps distinguish them from properties of other cells

Perhaps we just mean a way of thinking about the problem

There are, after all, good examples in the literature of computation

We need to remember that there are plenty of journals that cover this: Neural Computation, Biological Cybernetics and PLoS Computational Biology.

I have always had a soft spot for this paper (how do we explain what computations a neuron is performing in the standard framework used in neuroscience?).

What do we mean when we say it?

Let’s be rigorous here: what should we mean?

A billiard ball can compute. A series of billiard balls can compute even better. But does “intent” matter?

Computation=information transformation

Alright, let’s be pragmatic here.

BAM!

Michael Hendricks hands me my next clickbait post on a silver platter.

Coming to a twitter/RSS feed near you in January 2015…

 

The bigger problem with throwing the word ‘computation’ around like margaritas at happy hour is it adds weight to

What do you think about machines that think?

The answers to the Edge annual question are up: what do you think about machines that think?

Here were some of my favorite answers:

George Church, Carlo Rovelli, Donald D. Hoffman, Melanie Swan, Scott Atran, Richard Thaler, Terrence J. Sejnowski, Alex (Sandy) Pentland, Ursula Martin (and also the winner of most lyrical answer), Sheizaf Rafaeli, David Christian, George Dyson, Douglas Rushkoff, Helen Fisher, Tomaso Poggio, Bruce Schneier

Here are answers I was not fond of (you’ll sense a theme here, and yes I am obnoxiously opinionated about this particular subject):

Arnold Trehub, Gerald Smallberg, Daniel L. Everett (mostly for mentioning Searle’s Chinese Room)

And I can’t tell if this one is brilliant or nuts.

I was surprised at how few women there were, so I made this chart of the respondents since 2010 (a rough count suggests that there were 16, 21, 24, 26, 34, and 13 from 2010-2015). It seems, uh, less than optimal.

Edge answers gender ratio

Anywho, here is a quick blurb about how I would have answered:

What do I think about machines that think?

I think that we cannot understand what machines that think will be like. Look out at the living world and ask yourself how well you understand the motivations of the animals that reside in it. Sure, you can guess at the things most related to us – cats, dogs, mice – but even these can be obtuse in their thought patterns. But, as Nagel asked us, consider the bat: how well can you place yourself in the mind of a creature that sees with its ears? Or ‘thinks’ in smells instead of sights?

It gets even harder when we consider animals further out. What do I think of how ants think? Of sea squirts that live and then eat their own brains, converting themselves into pseudo-plants? How do I comprehend their lives and their place in the world compared to ours?

In truth, humans have largely left the natural world that required us to interact with other animals. During the European Middle Ages, the agency of other animals was so implicit in their view of the world that trials would be held with lawyers defending the interests of sheep and chickens. Yes, chickens would be accused of murder and sheep of enticing men into lascivious acts. Now these moral agents are little more than machines – which says a lot about how we think of machines.

So machines that think? How will they think and what will it be like to live in an ecosystem with non-human moral agents again? I cannot answer the second question – although it is probably the more interesting one – but look at where machine intelligence is heading now. We already have teams of intelligences ferociously battling at the microsecond level to trade stocks, a kind of frothy match of wits lying beneath the surface of the stock market. We have roombas, wandering around our homes, content to get trapped behind your couch and clogged full of cat hair. We have vicious killers in virtual environments, killing your friends and enemies in Call of Duty and Starcraft.

How many of these machines will be distributed intelligences, how many overly-specialized, and how many general purpose? This is the question that will determine how I think about machines that think.