Studying the brain at the mesoscale

It is not entirely clear that we are going about studying the brain in the right way. Zachary Mainen, Michael Häusser and Alexandre Pouget have an alternative to our current model of (relatively) small groups of researchers focusing on their own idiosyncratic questions:

We propose an alternative strategy: grass-roots collaborations involving researchers who may be distributed around the globe, but who are already working on the same problems. Such self-motivated groups could start small and expand gradually over time. But they would essentially be built from the ground up, with those involved encouraged to follow their own shared interests rather than responding to the strictures of funding sources or external directives…

Some sceptics point to the teething problems of existing brain initiatives as evidence that neuroscience lacks well-defined objectives, unlike high-energy physics, mathematics, astronomy or genetics. In our view, brain science, especially systems neuroscience (which tries to link the activity of sets of neurons to behaviour) does not want for bold, concrete goals. Yet large-scale initiatives have tended to set objectives that are too vague and not realistic, even on a ten-year timescale.

Here are the concrete steps they suggest in order to form a successful ‘mesoscale’ project:

  1. Focus on a single brain function.
  2. Combine experimentalists and theorists.
  3. Standardize tools and methods.
  4. Share data.
  5. Assign credit in new ways.

Obviously, I am comfortable living on the internet a little more than the average person. But with the tools that are starting to proliferate for collaborations – Slack, GitHub, and Skype being the most frequently used right now – there is really very little reason for collaborations to be limited to neighboring labs.

The real difficulties are two-fold. First, you must actually meet your collaborators at some point! Generating new ideas for a collaboration rarely happens without the kind of spontaneous discussions that arise when physically meeting people. When communities are physically spread out or do not meet in a single location, this can happen less than you would want. If nothing else, this proposal seems like a call for attending more conferences!

Second is the ad-hoc way data is collected. Calls for standardized datasets have been around about as long as there has been science to collaborate on and it does not seem like the problem is being solved any time soon. And even when datasets have been standardized, the questions that they had been used for may be too specific to be of much utility to even closely-related researchers. This is why I left the realm of pure theory and became an experimentalist as well. Theorists are rarely able to convince experimentalists to take the time out of their experiments to test some wild new theory.

But these mesoscale projects really are the future. They are a way for scientists to be more than the sum of their parts, and to be part of an exciting community that is larger than one or two labs! Perhaps a solid step in this direction would be to utilize the tools that are available to initiate conversations within the community. Twitter does this a little, but where are the foraging Slack chats? Or amygdala, PFC, or evidence-accumulation communities?


Alert: empirical parasites are taking advantage of data scientists

The aerial view of the concept of collecting data is beautiful. What could be better than high-quality information carefully examined to give a p-value less than .05? The potential for leveraging these results for narrow papers in high-profile journals, never to be checked except by other independent studies costing thousands – tens of thousands – is a moral imperative to honor those who put the time and effort into collecting that data.

However, many of us who have actually performed data analyses, managed large data sets and analyses, and curated data sets have concerns about the details. The first concern is that someone who is not regularly involved in the analysis of data may not understand the choices involved in statistical testing. Special problems arise if data are to be combined from independent experiments and considered comparable. How heterogeneous were the study populations? Does the underlying data fulfill the assumptions for each test? Can it be assumed that the differences found are due to chance or improper correction for complex features of the data set?

A second concern held by some is that a new class of research person will emerge – people who have very little mathematical and computational training but analyze data for their own ends, possibly stealing from the research productivity of those who have invested much of their career in these very areas, or even using the data to try to prove what the original investigators had posited before data collection! There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as “empirical parasites”.

Wait wait, sorry, that was an incredibly stupid argument. I don’t know how I could have even come up with something like that… It’s probably something more like this:

A second concern held by some is that a new class of research person will emerge — people who had nothing to do with the design and execution of the study but use another group’s data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited. There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as “research parasites.”

Yes, that’s it, open science could lead the way to research parasites analyzing other people’s data. I now look forward to the many other subtle insights on science that the editors of NEJM have to say.

This man has neuroscientists FURIOUS!

I just got back from a vacation in London and Scotland, which is a fantastic way to clear your mind. Then I returned and asked people: what did I miss? Apparently this, which instantly clouded my mind right back up.

Here’s a suggestion: never trust an article called “Here’s Why Most Neuroscientists Are Wrong About the Brain”. Especially when the first sentence is flat-out wrong:

Most neuroscientists believe that the brain learns by rewiring itself—by changing the strength of connections between brain cells, or neurons

Okay maybe not wrong – that is one, though not the only, way that neuroscientists think the brain learns – but certainly misleading in the extreme. What are some of the non-synaptic mechanisms that contribute to plasticity?

  1. Homeostatic plasticity/intrinsic excitability
  2. Neuromodulation (e.g. dopamine-related transcription changes of all sorts)
  3. Developmental plasticity
  4. Attractor states (we can quibble about whether this counts)
  5. Neurogenesis
  6. Dendritic excitability

That’s what I can come up with off the top of my head; I am sure that there are more. I’ll just focus on one for a second – because I think it is pretty cool – the intrinsic excitability of a neuron. Neurons maintain a balance of hyperpolarizing and depolarizing ion channels in order to control how often they fire, as well as how responsive they are to input in general. A simple experiment is to block the ability of a neuron to fire for, say, a day. When you remove the blockade, the cell will now be firing much more rapidly. It is pretty easy to imagine that all sorts of things can happen to a network when a cell fundamentally changes how it is firing action potentials. [For more, I always think of Gina Turrigiano in connection to this literature.]
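
If you want a feel for how powerful that knob is, here is a toy simulation of the idea (the target rate, drive, and time constant are all numbers I made up, not anything from a real paper): the cell slowly turns up its intrinsic gain whenever it fires below a target rate, you silence it for a stretch, and when you release the blockade it comes back firing well above where it started.

```python
# Toy model of homeostatic intrinsic plasticity (illustration only).
# The cell's intrinsic "gain" slowly adapts so its firing rate tracks a
# target rate; the drive, target rate, and time constant are made up.

target_rate = 5.0   # desired firing rate (Hz), hypothetical
gain = 1.0          # intrinsic excitability: scales synaptic drive into firing
tau_homeo = 2000.0  # slow homeostatic time constant (in steps)
drive = 5.0         # mean synaptic drive, arbitrary units
rates = []

for t in range(3000):
    blocked = 500 <= t < 1500            # "TTX" period: spiking is blocked
    rate = 0.0 if blocked else gain * drive
    rates.append(rate)
    # Homeostasis: firing below target slowly cranks intrinsic gain up,
    # firing above target cranks it back down.
    gain += (target_rate - rate) / tau_homeo
    gain = max(gain, 0.0)

print(f"rate before the block:      {rates[499]:.1f} Hz")
print(f"rate right after the block: {rates[1500]:.1f} Hz")
print(f"rate after recovering:      {rates[-1]:.1f} Hz")
```

The numbers are meaningless; the point is that a purely intrinsic, non-synaptic knob is enough to change how the rest of the network sees that cell.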

 

I also wonder about this quote in the article:

Neuroscientists have not come to terms with this truth. I have repeatedly asked roomfuls of my colleagues, first, whether they believe that the brain stores information by changing synaptic connections—they all say, yes—and then how the brain might store a number in an altered pattern of synaptic connections. They are stumped, or refuse to answer.

Perhaps he was unclear when asking the question, because this is a solved problem. Here is a whole chapter on encoding numbers with ‘synaptic’ weights. Is this how the brain does it? Probably not. But it is trivially easy to train a classic neural network to store numbers.
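
To be concrete, here is about the dumbest possible version (my own sketch, not anything from the linked chapter): store the number 42 in a vector of ‘synaptic’ weights with the plain old delta rule.

```python
import numpy as np

# Store a number in "synaptic" weights: a fixed cue pattern is the input,
# and the delta rule adjusts the weights until the weighted sum of the cue
# reads out the stored number. Purely illustrative.

rng = np.random.default_rng(1)

cue = rng.normal(size=10)   # an arbitrary, fixed input pattern
w = np.zeros(10)            # the "synapses" that will hold the number
target = 42.0               # the number to store
lr = 0.01                   # learning rate

for _ in range(1000):
    out = w @ cue                    # the network's readout
    w += lr * (target - out) * cue   # delta rule: nudge weights toward target

print(f"recalled value: {w @ cue:.3f}")  # ~42.000
```

A readout from a whole population of weights works just as well; the single linear unit is only for brevity.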

I do not mean to harp on the author of this piece, but these are interesting questions that he is raising! I love molecules. Neuroscience often enjoys forgetting about them for simplicity. But it is important that neuroscientists are clear on what we already know – and what we actually do not.

[note: sorry, I meant to post this late last week but my auto-posting got screwed up. I’m going to blame this on jet lag…]

The living building

Buildings – even the most cement-filled – are organic; they change through interaction with the parasites that infest them (us, mostly). How often do architects consider this? Ask any scientist who moves into a new laboratory building and you’ll be met with eyerolls and exasperated stories. The new neuroscience institute that I work in is fantastic in many ways, but it has some extremely puzzling features, such as the need to repeatedly use an ID card to unlock almost every door in the lab. This is in contrast to my previous home, the Salk Institute, which is one long open space separated only by clear glass, allowing free movement and easy collaboration.

I mostly mention this because the video above – on How Buildings Learn – has a fantastic story at the beginning about MIT’s famous media lab:

I was at the Media Lab when it was brand new. In the three months I was there, the elevator caught fire, the revolving door kept breaking, every doorknob in the building had to be replaced, the automatic door-closer was stronger than people and had to be adjusted, and an untraceable stench of something horrible dead filled the lecture hall for months. This was normal.

In many research buildings, a central atrium serves to bring people together with open stairways, casual meeting areas, and a shared entrance where people meet daily. The Media Lab’s entrance cuts people off from each other; there are three widely separated entrances each huge and glassy; three scattered elevators; few stairs; and from nowhere can you see other humans in the five story space. Where people might be visible, they are carefully obscured by internal windows of smoked glass.

The Obama raise (for scientists)

My dad, being my dad, sent me an article claiming that Obama was about to change overtime rules so that more salaried workers will be covered. Would I get paid more? Psh, yeah right, I said. But then I looked a bit more closely and it wasn’t so clear. The newly proposed rules state that anyone making under $47,892 would be eligible for overtime (whereas previously the limit was a measly $24,000). That is: most postdocs will, technically, be eligible for overtime money if they work more than 40 hours per week.

So I decided to ask the Twitter hivemind and set off a predictable storm of WTF’s. The summary is: yes, it looks right now like postdocs will be eligible for overtime pay but there is a commentary period to propose exceptions to these rules (I don’t think graduate students will because they are “students”). No, no one thinks this will actually end up happening; somehow the NIH/NSF will make postdocs exempt from these rules (see a bit more here). Here are the full proposed rules. If you have opinions about these rules, please send in comments:

The Notice of Proposed Rulemaking (NPRM) was published on July 6, 2015 in the Federal Register (80 FR 38515) and invited interested parties to submit written comments on the proposed rule at www.regulations.gov on or before September 4, 2015. Only comments received during the comment period identified in the Federal Register published version of the NPRM will be considered part of the rulemaking record.

I was asked to do a storify of all the twitter responses but, uh, there were a lot of them and I wasn’t 100% paying attention. So here are some salient points:

  1. What are the job duties of a postdoc? Does going to a lecture count, or will that not count toward “work time” (if it does, do I get credit for reading a salient paper at home? At lab?)
  2. Is a fellow an employee, or are they different somehow? Is this technically a “salary”? This seems to be a point of confusion and came up repeatedly.
  3. Calling PDs “trainees” while also claiming them as exempt “learned professionals” is a joke.
  4. This may increase incentive to train PhDs and decrease incentive to hire postdocs (“For my lab, min PD cost would be $62k/yr, PhD cost $31k/yr all-in.”). Similarly, the influence may be most felt on small labs with less money, less on large “prestige” labs.

#1 is the most interesting question in general.

If enforced at all (hmm), this would functionally be like a decrease in NIH/NSF/etc. funding. But let’s face it, I think we can all agree that the most likely outcome here is an ‘exemption’ for postdocs and other scientists…

Edit: I meant to include this: currently in the overtime rules, there is a “learned professional” exemption that describes scientists – and is why they do not get overtime pay. In order to qualify for that exemption, there is some salary floor that they must make ($455 per week, or ~$23,660 per year). The new proposed rules will state:

In order to maintain the effectiveness of the salary level test, the Department proposes to set the standard salary level equal to the 40th percentile of earnings for full-time salaried workers ($921 per week, or $47,892 annually for a full-year worker, in 2013)

The NIH stipend levels are currently $42,480 for a first-year postdoc, increasing yearly and passing this threshold in year 4. The fastest way to avoid overtime rules would be to simply bump the first-year salary up to $47,893.
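
For the curious, here is the back-of-the-envelope version of that arithmetic (the 1.5x multiplier is the standard FLSA overtime rate; the 60-hour week and the hourly-rate conversion are just my assumptions for illustration):

```python
# Back-of-the-envelope reading of the proposed rule, using the figures quoted
# above. The 1.5x multiplier is the standard FLSA overtime rate; the 60-hour
# work week and the hourly-rate conversion are assumptions for illustration.

proposed_threshold = 921 * 52   # $47,892/year, as in the quoted rule
salary = 42_480                 # first-year postdoc salary mentioned above
hours_per_week = 60             # hypothetical postdoc work week

if salary < proposed_threshold:
    hourly = salary / (52 * 40)                  # implied base hourly rate
    overtime_hours = max(hours_per_week - 40, 0)
    annual_overtime = 1.5 * hourly * overtime_hours * 52
    print(f"eligible; roughly ${annual_overtime:,.0f}/year of overtime owed")
else:
    print("exempt under the proposed salary test")
```

At those assumed hours, simply bumping a first-year salary over the threshold would be far cheaper than paying the overtime.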

Is the idea that neurons perform ‘computations’ in any way meaningful?

I wrote this up two months ago and then forgot to post it. Since then, two different arguments about ‘computation’ have flared up on Twitter.

I figured that meant I should finally post it to help clarify some things. I will have more comments on the general question tomorrow.

Note that I am pasting twitter conversations into wordpress and hoping that it will convert it appropriately. If you read this via an RSS reader, it might be better to see the original page.

The word ‘computation’, when used to refer to neurons, has started to bother me. It often seems to be thrown out as a meaningless buzzword, as if using the word computation makes scientific research seem more technical and more interesting. Computation is interesting and important but most of the time it is used to mean ‘neurons do stuff’.

In The Future of the Brain (review here), Gary Marcus gives a nice encapsulation of what I am talking about:

“In my view progress has been stymied in part by an oft-repeated canard — that the brain is not “a computer” — and in part by too slavish a devotion to the one really good idea that neuroscience has had thus far, which is the idea that cascades of simple low level “feature detectors” tuned to perceptual properties, like difference in luminance and orientation, percolate upward in a hierarchy, toward progressively more abstract elements such as lines, letters, and faces.”

Which pretty much sums up how I feel: either brains aren’t computers, or they are computing stuff but let’s not really think about what we mean by computation too deeply, shall we?

So I asked about all this on Twitter, then went to my Thanksgiving meal, forgot about it, and ended up getting a flood of discussion that I haven’t been able to digest until now:

(I will apologize to the participants for butchering this and reordering some things slightly for clarity. I hope I did not misrepresent anyone’s opinion.)

The question

Let’s first remember that the very term ‘computation’ is almost uselessly broad.

Neurons do compute stuff, but we don’t actually think about them like we do computers

Just because it ‘computes’, does that tell us anything worthwhile?

The idea helps distinguish them from properties of other cells

Perhaps we just mean a way of thinking about the problem

There are, after all, good examples in the literature of computation

We need to remember that there are plenty of journals that cover this: Neural Computation, Biological Cybernetics and PLoS Computational Biology.

I have always had a soft spot for this paper (how do we explain what computations a neuron is performing in the standard framework used in neuroscience?).
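
My guess is that the ‘standard framework’ here means something like the linear–nonlinear cascade: summarize a neuron’s computation as a linear filter followed by a pointwise nonlinearity and (maybe) Poisson spiking. A completely made-up minimal version:

```python
import numpy as np

# A minimal linear-nonlinear (LN) "neuron" - my guess at the standard
# single-neuron description of computation meant here. The filter and
# the nonlinearity are invented for illustration.

rng = np.random.default_rng(2)

stimulus = rng.normal(size=(1000, 20))   # 1000 time points of a 20-dim stimulus
filt = np.exp(-np.arange(20) / 5.0)      # an invented linear receptive field
filt /= np.linalg.norm(filt)

drive = stimulus @ filt                  # linear stage: project onto the filter
rate = np.log1p(np.exp(drive - 1.0))     # nonlinear stage: softplus firing rate
spikes = rng.poisson(rate)               # Poisson spike counts from the rate

print(spikes[:20])
```

Whether calling that two-line cascade a ‘computation’ buys us anything is, of course, exactly the question being argued about.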

What do we mean when we say it?

Let’s be rigorous here: what should we mean?

A billiard ball can compute. A series of billiard balls can compute even better. But does “intent” matter?

Computation = information transformation

Alright, let’s be pragmatic here.

BAM!

Michael Hendricks hands me my next clickbait post on a silver platter.

Coming to a twitter/RSS feed near you in January 2015…

 

The bigger problem with throwing the word ‘computation’ around like margaritas at happy hour is it adds weight to

What do you think about machines that think?

The answers to the Edge annual question are up: what do you think about machines that think?

Here were some of my favorite answers:

George Church, Carlo Rovelli, Donald D. Hoffman, Melanie Swan, Scott Atran, Richard Thaler, Terrence J. Sejnowski, Alex (Sandy) Pentland, Ursula Martin (and also the winner of most lyrical answer), Sheizaf Rafaeli, David Christian, George Dyson, Douglas Rushkoff, Helen Fisher, Tomaso Poggio, Bruce Schneier

Here are answers I was not fond of (you’ll sense a theme here, and yes I am obnoxiously opinionated about this particular subject):

Arnold Trehub, Gerald Smallberg, Daniel L. Everett (mostly for mentioning Searle’s Chinese Room)

And I can’t tell if this one is brilliant or nuts.

I was surprised at how few women there were, so I made this chart of the respondents since 2010 (a rough count suggests 16, 21, 24, 26, 34, and 13 women respondents from 2010-2015). It seems, uh, less than optimal.

[Chart: Edge answers gender ratio]

Anywho, here is a quick blurb about how I would have answered:

What do I think about machines that think?

I think that we cannot understand what machines that think will be like. Look out at the living world and ask yourself how well you understand the motivations of the animals that reside in it. Sure, you can guess at the things most related to us – cats, dogs, mice – but even these can be obtuse in their thought patterns. But, as Nagel asked us, consider the bat: how well can you place yourself in the mind of a creature that sees with its ears? Or ‘thinks’ in smells instead of sights?

It gets even harder when we consider animals further out. What do I think of how ants think? Of sea squirts that live and then eat their own brains, converting themselves into pseudo-plants? How do I comprehend their lives and their place in the world compared to ours?

In truth, humans have largely left the natural world that required us to interact with other animals. During the European Middle Ages, the agency of other animals was so implicit in their view of the world that trials would be held with lawyers defending the interests of sheep and chickens. Yes, chickens would be accused of murder and sheep of enticing men into lascivious acts. Now these moral agents are little more than machines – which says a lot about how we think of machines.

So machines that think? How will they think and what will it be like to live in an ecosystem with non-human moral agents again? I cannot answer the second question – although it is probably the more interesting one – but look at where machine intelligence is heading now. We already have teams of intelligences ferociously battling at the microsecond level to trade stocks, a kind of frothy match of wits lying beneath the surface of the stock market. We have roombas, wandering around our homes, content to get trapped behind your couch and clogged full of cat hair. We have vicious killers in virtual environments, killing your friends and enemies in Call of Duty and Starcraft.

How many of these machines will be distributed intelligences, how many overly-specialized, and how many general purpose? This is the question that will determine how I think about machines that think.

Who is getting hired in neuroscience?

I am always a bit jealous of how organized the field of academic economics is compared to, well, anyone else. To get an academic job, young economists put their one “job market paper” up in a centralized database for prospective employers to evaluate (also, they do not do postdocs). This gives them a large dataset to analyze. FiveThirtyEight has a nice analysis of what the people looking for an academic economics job are working on (there’s more in the link):

Neuroscience does not have an equivalent database, unfortunately. But I do run the neurorumblr, which aggregates neuroscience faculty job postings. The postings often break down, in broad categories, what type of research they want candidates to pursue. There are currently ~95 job postings; here is what they are looking for.

[Chart: Neuroscience jobs]
I was surprised by the number of computational positions; a large chunk of them are computational and cognitive, which leads me to think they may be EEG/fMRI postings? I’m not sure.
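
For what it’s worth, the tally behind that chart is nothing fancier than counting labels – something like this (the categories and counts here are invented for illustration, not the actual neurorumblr data):

```python
from collections import Counter

# How a tally like the chart above gets made (hypothetical sketch; the
# labels and the list itself are invented, not the actual neurorumblr data).

postings = [
    "computational", "cognitive", "systems", "computational", "molecular",
    "cognitive", "systems", "computational", "cellular", "behavioral",
]

for category, n in Counter(postings).most_common():
    print(f"{category:>15}: {n}")
```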

Also, “cognitive” is the new “psychology”.

 

Why is reporting on health and science so bad? Because the reporters can’t do their jobs.

Imagine this scenario: a sports reporter is asked to cover an emerging conflict in the Middle East. The sports reporter, never particularly keen on international affairs, is on a deadline and looks to see what they can use. There’s in-person video of the central events in question, but our journalist friend doesn’t have the necessary background or context to fully understand what happened. Is there something else? A press release from the US government and from one side of the conflict in the Middle East? Sounds like our sportsman is good to go! Just copy and paste the exciting bits, add in the little bit of context that our intrepid soul already has, and bingo. News has been reported!

Later, it turns out that our poor reporter has been duped! The press release from the Middle East was nothing but PR, empty words of propaganda to make things seem more important and interesting than they really are! Our friend from the sports section sighs, wishing he had asked someone who knew about this kind of thing who would have known what to look out for.

In a similar vein, Vox has an article asking why so many articles on health (and, let’s admit it, science) are junk. The culprit is identified as clearly as in our example above: coverage by those who don’t know, or don’t care. See:

The researchers found that university press offices were a major source of overhype: over one-third of press releases contained either exaggerated claims of causation (when the study itself only suggested correlation), unwarranted implications about animal studies for people, or unfounded health advice.

…When a press release included actual health advice, 58 percent of the related news articles would do so too (even if the actual study did no such thing). When a press release confused correlation with causation, 81 percent of related news articles would. And when press releases made unwarranted inferences about animal studies, 86 percent of the journalistic coverage did, too.

…Unfortunately, however, this isn’t a perfect world. Many journalists are often covering science in the morning, and the courts in the afternoon. We are under heavy pressure to meet multiple deadlines every day, and some of us lack the time, sources, or background to properly vet the studies we’re reporting on.

So we rely on scientists and on press offices to guide us through research, even though, clearly, we shouldn’t.

Wait – what? The problem is the scientists and press offices? Because reporters are too overworked or unqualified to do their job properly? It sounds from the quote above like reporters are just parroting what a press release says without actually reading the source material. It sounds like reporters aren’t doing their jobs. But rather than accept the blame, they are trying to avoid the responsibility.

Unless I am mistaken, the job of a journalist is not to overlay press releases with a thin veneer of impartiality. Their job is to synthesize new information with their existing bank of expertise in order to convey to a naive audience what is or isn’t novel or important. Conversely, the job of a PR department – which follows from its incentive structure – is quite clearly to hype new research. Does anyone think that a press release from a corporation is written to be as truthful as possible, rather than to put as good a spin on things as possible?

If the reporter knew enough about the field, they would be able to check whether or not the things they were writing were true. Where in the paper does it say this correlation exists? Is there an exaggeration? How much?

If they are unable to do that, what are they doing? Why should I read science or health journalism if they are unable to discern fact from fiction?

Monday open question: Which neuroscientists have most influenced your thinking?

With the release of the NIH BRAIN Initiative grants, it’s become clear that there’s a big disconnect between members of the different subfields: people working in molecular neuroscience, cognitive neuroscience, systems neuroscience, etc. I’m just as bad as anyone else, so I thought it might be useful to ask: who are the, say, five most important influences on your work?

For me, the names would have to be (in no particular order):

1. Eve Marder

2. Bill Bialek

3. Rachel Wilson

4. Krishna Shenoy/Mark Churchland

5. Cori Bargmann