Facts about color vision

From an article by Ed Yong (remember, rods ~ night vision, cones ~ color daytime vision):

In 1913, American zoologists Horatio H. Newman and J. Thomas Patterson wrote, “The eyes [of the nine-banded armadillo] are rudimentary and practically useless. If disturbed an armadillo will charge off in a straight line and is as apt to run into a tree trunk as to avoid it…”

A wide range of animals, including many birds, fish, reptiles, and amphibians, have eyes with four types of cones, allowing them to discriminate between a huge range of colours. Mammals, however, evolved from a nocturnal ancestor that had already lost two of its cones, and many have stuck with this impoverished set-up. Dogs, for example, still only have two cones: one tuned to violet-ish colours and another tuned to greenish-blue. (Contrary to popular belief, a dog’s world isn’t black-and-white; they see colours, albeit a limited palette.)

Humans and other primates partly reversed the ancient loss by reinventing a third red-sensitive cone, which may have helped us to discern unripe green fruits from ripe red/orange ones. Ocean-going mammals, meanwhile, took the loss of cones even further and disposed of their blue/violet-sensitive ones. And the great whales have lost all their cones entirely. They only have rods. The ocean is blue, but a blue whale would never know…there are even people who have rod-only vision—they do well in all but brilliant sunlight, and have sharp enough vision to read in normal light. (Then again, Emerling says that this condition is sometimes called “day blindness”, and that “it’s frequently painful for these individuals to keep their eyes open during the day.”)

There are tons of other great little facts about the vision of different animals in the articles.

Anki Drive and the coming rise of intelligent toys

So there is this new toy called the Anki Drive: basically old-fashioned Scalextric (racing cars on a track) combined with an iPhone. It doesn’t sound that exciting at first – it just lets you add little things like “shooting missiles” at the other cars to make it a bit more video-game-esque.

But the really exciting thing? The iPhone also controls the other cars autonomously, giving each one its own personality – aggressive, cooperative, and so on.

Yes: this car gives children the gift of multiple competitive artificial intelligences with unique personalities, as if that is no big deal. Next up: drone friends? Commercialized artificial intelligence in, well, everything is on its way.

Merry Christmas.

Cordelia Fine and Feminism in neuroscience

When I first started my PhD in neuroscience, a philosophically-inclined friend of mine started expounding on Feminist critiques of science. To most people, this would seem irrelevant to the science I was investigating: theory and modeling on a computer, before moving to hermaphroditic C. elegans. No females or males were being studied here! But the ideas are both insightful and important:

Emily Martin examines the metaphors used in science to support her claim that science reinforces socially constructed ideas about gender rather than objective views of nature. In her study of the fertilization process, for example, she asserts that classic metaphors of the strong dominant sperm racing to an idle egg are products of gendered stereotyping rather than an objective portrayal of human fertilization. The notion that women are passive and men are active reflects socially constructed attributes of gender which, according to Martin, scientists have projected onto the events of fertilization, thereby obscuring the fact that eggs play an active role.

Martin describes working with a team of sperm researchers at Johns Hopkins to illustrate how language in reproductive science adheres to social constructs of gender despite scientific evidence to the contrary: “even after having revealed…the egg to be a chemically active sperm catcher, even after discussing the egg’s role in tethering the sperm, the research team continued for another three years to describe the sperm’s role as actively penetrating the egg.”

Concepts are linked in our minds, consciously or not; the metaphors that we use matter (think of a Hopfield network). It would behoove all scientists to think deeply about Feminist critiques and their broader implications. The above example is canonical for a reason: a system of two interacting agents (sperm, egg) with one decision-maker (sperm) is very different from one with two decision-makers (sperm and egg). But preconceived gender notions prevented us from noticing this simple fact!
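As a concrete aside on that parenthetical – purely illustrative, nothing here is from the article – a Hopfield network stores patterns such that cueing it with a fragment of one drags the whole associated pattern back out, which is roughly why loaded metaphors are so sticky. A minimal sketch with made-up random patterns:

```python
# Minimal Hopfield-network sketch (illustrative only): store a few binary
# "concept" patterns, then cue the network with a corrupted fragment and
# watch it complete the rest -- associations pull each other along.
import numpy as np

rng = np.random.default_rng(0)
n = 64                                         # number of units
patterns = rng.choice([-1, 1], size=(3, n))    # three stored "concepts"

# Hebbian learning: the weight matrix accumulates outer products of patterns.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

# Cue with a corrupted version of pattern 0 (half the units set to noise).
cue = patterns[0].copy()
cue[: n // 2] = rng.choice([-1, 1], size=n // 2)

state = cue.copy()
for _ in range(10):                            # update until it settles
    state = np.sign(W @ state)
    state[state == 0] = 1

print("overlap with stored concept:", int(state @ patterns[0]), "/", n)
```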

Cordelia Fine is the most prominent scientist articulating these views in neuroscience today. This month, she has given two good interviews. If you take away one big point, it is that males and females may have different population means (though this interacts with social circumstances), but there is substantial population overlap. Humans, however, like to see things in binary opposition, so we either simply don’t recognize the amount of overlap that exists or blow small differences out of proportion.

One is with Uta Frith:

Cordelia: Happily, the perspectives are definitely not that polarized. One thing that’s worth stressing though is that criticisms of this area of research don’t stem from a belief that it’s intrinsically problematic to look at the effects of biological sex on the brain. But implicit assumptions about female/male differences in brain and behavior do influence research design and interpretation. They do this in ways that can give rise to misleading conclusions that additionally reinforce harmful gender stereotypes….

Cordelia: Yes, and long before the buzz about neuroplasticity, feminist neurobiologists were writing about this ‘entanglement’: the fact that the social phenomenon of gender (which systematically affects an individual’s psychological, physical, social and material experiences) is literally incorporated, shaping the brain and endocrine system. One of the recommendations of our article is for researchers to attempt to incorporate the principle of entanglement into their research models, including more and/or different categories of independent variables that include ways of capturing the role of the environment.

And with the Neurocritic at the PLoS Neuroscience community:

With regards to sample size, different implicit models of sex/gender and the brain will give rise to different intuitions or assumptions about what is an adequate sample size. According to implicit essentialist assumptions, there are distinctively different ‘male’ and ‘female’ brains. But non-human animal research has shown that biological sex interacts in complex ways with many different factors (hormones, stress, maternal care, and so on) to influence brain development. Because of the complexity and idiosyncrasy of these sex influences, this doesn’t give rise to distinctive female and male brains, but instead, heterogeneous mosaics of ‘female’ and ‘male’ (statistically defined) characteristics…

As for publication bias toward positive findings, this has long been argued to be particularly acute when it comes to sex differences. It’s ubiquitous for the sex of participants to be collected and available, so the sexes may be routinely compared with only positive findings reported. As Anelis Kaiser and her colleagues have pointed out, this emphasis on differences over similarities is also institutionalized in databases that only allow searches for sex/gender differences.
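To put the “different means, substantial overlap” point above in numbers, here is a small sketch with a made-up effect size (the d = 0.3 below is hypothetical, chosen only to illustrate how much two populations still overlap at a small-to-moderate standardized difference):

```python
# Two normal populations whose means differ by a standardized effect size of
# d = 0.3 (a made-up, illustrative value) still overlap heavily.
from scipy.stats import norm

d = 0.3  # hypothetical standardized mean difference (Cohen's d)

# Overlapping coefficient for two equal-variance normals separated by d:
# OVL = 2 * Phi(-d / 2)
ovl = 2 * norm.cdf(-d / 2)

# Common-language effect size: probability that a random draw from the
# higher-mean group exceeds a random draw from the lower-mean group,
# Phi(d / sqrt(2))
cles = norm.cdf(d / 2**0.5)

print(f"distribution overlap: {ovl:.0%}")                                 # ~88%
print(f"P(random 'higher' individual > random 'lower' one): {cles:.0%}")  # ~58%
```

Even with a real difference in means, roughly nine-tenths of the two distributions overlap in this toy case, and knowing which group someone belongs to tells you very little about where they fall.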

We are already cyborgs

You could see this as the impossibility of separating our ‘selves’ from our choices and environment. You could also see it as a sign of how integrated technology already is into our bodies, even though we usually don’t realize it.

Biss says we are doubly bound: to nature and to technology, two systems we can neither fully comprehend nor completely reject. The cyborg scholar Chris Hables Gray has written that many of us are “literally cyborgs, single creatures that include organic and inorganic subsystems.” The inorganic subsystem, Gray explains, is the “programming of the immune system that we call vaccination.” The vaccines are made by corporations, but corporations are made by people, and both the immune response and the antibodies it produces—to wit, the organic subsystems—are made by cells. Yet cells are so numerous, so automated, that they resemble, in a way, corporate drones.

From an article on our complex, churning, learning immune system.

How do you do your science?

I spend too much time thinking about the best way to do science. How should I structure my experiments if I want to maximize the likelihood that what I discover is both true and useful to other people? And how many different experiments do I need to do? Especially as a theoroexperimentalist.

The background philosophy of science chatter has picked up a bit over the last week, and I was spurred by something said on Noahpinion:

I don’t see why we should insist that any theory be testable. After all, most of the things people are doing in math departments aren’t testable, and no one complains about those, do they? I don’t see why it should matter if people are doing math in a math department, a physics department, or an econ department.

Also (via Vince Buffalo):

As a mathematical discipline travels far from its empirical source, or still more, if it is a second and third generation only indirectly inspired from ideas coming from ‘reality’, it is beset with very grave dangers. It becomes more and more purely aestheticizing, more and more purely l’art pour l’art. This need not be bad, if the field is surrounded by correlated subjects, which still have closer empirical connections, or if the discipline is under the influence of men with an exceptionally well-developed taste. But there is a grave danger that the subject will develop along the line of least resistance, that the stream, so far from its source, will separate into a multitude of insignificant branches, and that the discipline will become a disorganized mass of details and complexities. In other words, at a great distance from its empirical source, or after much ‘abstract’ inbreeding, a mathematical subject is in danger of degeneration. At the inception the style is usually classical; when it shows signs of becoming baroque the danger signal is up. It would be easy to give examples, to trace specific evolutions into the baroque and the very high baroque, but this would be too technical. In any event, whenever this stage is reached, the only remedy seems to me to be the rejuvenating return to the source: the reinjection of more or less directly empirical ideas. I am convinced that this is a necessary condition to conserve the freshness and the vitality of the subject, and that this will remain so in the future. – John von Neumann

Right before I left the Salk Institute, I was chatting with an older scientist and he said something along the following lines (paraphrasing):

The best science is done when you can identify a clear mechanism that you can test; [anonymized scientist] was known for having a lot of confirmatory evidence that pointed at some result, but nothing conclusive, no mechanism. Pretty much all of it ended up being wrong.

Basically, he was of the opinion that you can provide evidence of a direct mechanism, or you can provide evidence for a general idea that is consistent and points to that mechanism like so:

[Figure: Experimental design – where should you collect your evidence? Each situation makes one of these arrows easier to collect than others.]

But if you want to maximize your likelihood of making a lasting impact on knowledge, where do you want to place your bets? Can theories come before mechanism?

I don’t know.


Who is getting hired in neuroscience?

I am always a bit jealous of how organized the field of academic economics is compared to, well, anyone else. To get an academic job, young economists put their one “job market paper” into a common database for prospective employers to evaluate (also, they do not do postdocs). This gives the field a large dataset to analyze. FiveThirtyEight has a nice analysis of what people looking for an academic economics job are working on (there’s more in the link):

Neuroscience does not have an equivalent database, unfortunately. But I do run the neurorumblr, which aggregates neuroscience faculty job postings. The postings often break down, into broad categories, the type of research they want candidates to pursue. There are currently ~95 job postings; here is what they are looking for.

[Chart: Neuroscience jobs – current faculty postings broken down by research category]
I was surprised by the number of computational positions; a large chunk of them are both computational and cognitive, which leads me to think they may be EEG/fMRI positions. I’m not sure.

Also, “cognitive” is the new “psychology”.
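For what it’s worth, the tally behind a breakdown like the one above is simple to produce. A hypothetical sketch – the postings and category labels below are invented examples, not the actual neurorumblr data:

```python
# Hypothetical sketch of how a category breakdown could be tallied.
# The postings and labels are invented, not real neurorumblr listings.
from collections import Counter

postings = [
    {"school": "University A", "areas": ["computational", "cognitive"]},
    {"school": "University B", "areas": ["cellular/molecular"]},
    {"school": "University C", "areas": ["systems", "computational"]},
    # ...one entry per advertised position
]

counts = Counter(area for p in postings for area in p["areas"])
for area, n in counts.most_common():
    print(f"{area}: {n}")
```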


What is the future of the brain?

I recently read Gary Marcus and Jeremy Freeman’s new book, The Future of the Brain, and was so enthusiastic about it that I decided to write a review. And got a bit carried away. Oh well. Hopefully it makes sense:

The past two decades have seen an explosion in tools that can dissect and record signals in the brain. Diverse sets of molecules that allow investigation of tens to hundreds of neurons simultaneously have drastically improved our spatial knowledge of the brain. Light-activated ion channels combined with genetics have allowed us to precisely label and manipulate specific types of neurons. What was once a field devoted to such physics-era concepts as electrodes and membrane voltages is slowly moving in the direction of molecular biology, with signaling cascades and custom-made viruses being the tools of the day.

What we would like to understand, though, is: what are the tools of tomorrow? Where is neuroscience heading? The Future of the Brain, edited by Gary Marcus and Jeremy Freeman, collects essays from a series of neuroscientists on the direction research is moving. Importantly for a field as variegated as neuroscience, every essay has a distinct take on which direction is the important one to move in. But several themes emerge.

The best thing I can say about this book is that it made me stop and think. Most books about the brain I kind of skim because I already generally know the topic and few new ideas are put forth. The Future of the Brain offers important thoughts that need to be grappled with and is filled with things that I did not know.

If you are still looking to get a present for someone who is interested in neuroscience, I’d give them this book.

Monday open question: does fMRI activation have a consistent meaning?

Reports from fMRI rely, somewhat implicitly, on a rate-coding model of populations of neurons in the brain. More activity means more activation, and similar activation is usually taken to mean roughly the same thing. Useful, but misleading. How much should we rely on the interpretation that an area having similar activation in two different behaviors means the same thing? Neuroskeptic covers one such finding:

The authors are Choong-Wan Woo and colleagues of the University of Colorado, Boulder. Woo et al. say that, based on a new analysis of fMRI brain scanning data, they’ve found evidence inconsistent with the popular theory that the brain responds to the ‘pain’ of social rejection using the same circuitry that encodes physical pain. Rather, it seems that although the two kinds of pain do engage broadly the same areas, they do so in very different ways.

Roughly, they use a cool new statistical technique to measure activity in more oblique ways: combinations of activity across voxels carry more meaning than the single activation levels used in the past.
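I don’t know the details of their method, but the general flavor of pattern-based analysis is easy to sketch with simulated data (nothing below is their data or their exact technique): two conditions can drive identical average activation in a region while remaining fully separable by the pattern across voxels.

```python
# Simulated illustration (not the Woo et al. data or method): two conditions
# produce the same *average* activation in a region but distinct multivoxel
# patterns, so a pattern classifier separates them even though a standard
# univariate comparison sees nothing.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 200, 50

# Each condition gets its own voxel-wise pattern, shifted so that both
# patterns have the same mean (i.e., the same overall "activation").
pattern_a = rng.normal(size=n_voxels)
pattern_b = rng.normal(size=n_voxels)
pattern_a -= pattern_a.mean()
pattern_b -= pattern_b.mean()

X = np.vstack([pattern_a + rng.normal(scale=2.0, size=(n_trials, n_voxels)),
               pattern_b + rng.normal(scale=2.0, size=(n_trials, n_voxels))])
y = np.array([0] * n_trials + [1] * n_trials)

# Univariate view: mean regional activation per condition is indistinguishable.
print(f"mean activation (cond A vs B): {X[y == 0].mean():.3f} vs {X[y == 1].mean():.3f}")

# Multivariate view: the voxel pattern tells the conditions apart.
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"pattern-classifier accuracy: {acc:.2f}")
```

In this toy setup the univariate means are indistinguishable while the classifier is near ceiling, which is the sense in which “combinations of activity” carry information that a single activation number misses.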

The basic question here is: given that we know small regions can have multiple ‘cognitive’ meanings depending on which parts of the entire network – or which neurons within the region itself – are active, how much can we compare ‘activity’ signals between (or even within!) behaviors?

Obviously sometimes it will be entirely fine. Other times it won’t. Is there an obvious line?

Why is reporting on health and science so bad? Because the reporters can’t do their jobs.

Imagine this scenario: a sports reporter is asked to cover an emerging conflict in the Middle East. The sports reporter, never particularly keen on international affairs, is on a deadline and looks to see what they can use. There’s in-person video of the central events in question, but our journalist friend doesn’t have the necessary background or context to fully understand what happened. Is there something else? A press release from the US government and from one side of the conflict in the Middle East? Sounds like our sportsman is good to go! Just copy and paste the exciting bits, add in the little bit of context that our intrepid soul already has, and bingo. News has been reported!

Later, it turns out that our poor reporter has been duped! The press release from the Middle East was nothing but PR, empty words of propaganda to make things seem more important and interesting than they really are! Our friend from the sports section sighs, wishing he had asked someone who knew about this kind of thing and would have known what to look out for.

In a similar vein, Vox has an article asking why so many articles on health (and, let’s admit it, science) are junk. The culprit is identified as clearly as in our example above: coverage by those who don’t know, or don’t care. See:

The researchers found that university press offices were a major source of overhype: over one-third of press releases contained either exaggerated claims of causation (when the study itself only suggested correlation), unwarranted implications about animal studies for people, or unfounded health advice.

…When a press release included actual health advice, 58 percent of the related news articles would do so too (even if the actual study did no such thing). When a press release confused correlation with causation, 81 percent of related news articles would. And when press releases made unwarranted inferences about animal studies, 86 percent of the journalistic coverage did, too.

…Unfortunately, however, this isn’t a perfect world. Many journalists are often covering science in the morning, and the courts in the afternoon. We are under heavy pressure to meet multiple deadlines every day, and some of us lack the time, sources, or background to properly vet the studies we’re reporting on.

So we rely on scientists and on press offices to guide us through research, even though, clearly, we shouldn’t.

Wait – what? The problem is the scientists and press offices? Because reporters are too overworked or unqualified to do their job properly? From the quote above, it sounds like reporters are just parroting what a press release says without actually reading the source material. It sounds like reporters aren’t doing their jobs. But rather than accept the blame, they are trying to avoid the responsibility.

Unless I am mistaken, the job of a journalist is not to overlay press releases with a thin veneer of impartiality. Their job is to synthesize new information with their existing bank of expertise in order to convey to a naive audience what is or isn’t novel or important. Conversely, the job of a PR department – which follows from its incentive structure – is quite clearly to hype new research. Does anyone think that a press release from a corporation is written to be as truthful as possible, rather than to put as good a spin on things as possible?

If the reporter knew enough about the field, they would be able to check whether or not the things they were writing were true. Where in the paper does it say this correlation exists? Is there an exaggeration? How much?

If they are unable to do that, what are they doing? Why should I read science or health journalism if they are unable to discern fact from fiction?

Fight club for flies

I’ve been watching a lot of fly behavior recently and it’s pretty spectacular how easy it is to imagine you’re looking at a mammal (just smaller, smellier, and with more legs). They wander around, clean themselves off, rub their greedy little hands together, fight, and sing.

Watch the very good video above to see how these guys fight each other. It’s about work from David Anderson’s lab on aggression and tachykinin, aka Substance P.