Orchid mantis: more interesting than cryptic mimicry

[Photo: an orchid mantis]

I know, I know, you read the title and exclaim: what can be more exciting than cryptic mimicry?! Well, listen to this:

On the face of it, this is a classic evolutionary story, and a cut-and-dried case: the mantis has evolved to mimic the flower as a form of crypsis – enabling it to hide among its petals, feeding upon insects that are attracted by the flower…

O’Hanlon and colleagues set about systematically testing the ideas contained within the traditional view of the orchid mantis’ modus operandi. First, they tested whether mantises actually camouflage amongst flowers, or, alternatively, attract insects on their own…

However, when paired alongside the most common flower in their habitat, insects approached mantises more often than flowers, showing that mantises are attractive to insects by themselves, rather than simply camouflaging among the flowers… Surprisingly, mantises did not choose to hide among the flowers. They chose leaves just as often. Sitting near flowers did bring benefits, though, because insects were attracted to the general vicinity – the “magnet effect”.

But wait: there’s more!

As an aside, I’ve heard that praying mantises make great pets. They are social creatures that will creepily watch you everywhere you go, but also kind of ignore you. They’re like insect-cats.


Science blogs: still kinda there, I guess

I have bemoaned the lack of a neuroscience blogosphere before. Neuroscience blogs exist as independent fiefdoms, rarely responding to one another. And if we were to cut out the cognitive and psychological sides of neuroscience, the field of blogs would be more like a field of half-grown trees cut down and abandoned, with only a rare leaf or two peeking out of the desiccation.

So in the interests of navel-gazing, it is interesting to think about a post from DynamicEcology (Blogs are dying; long live science blogs):

The classic blog is “the unedited voice of an author”, who thinks out loud over an extended period of time and carries on an open-ended conversation with readers who like that author enough to read a significant fraction of his or her posts. That turns out to be a poor way to make money compared to the alternatives, which is a big reason blogs as a whole are dying. Another reason blogs as a whole are dying is that some of the things they used to be for are better done via other means (e.g., Twitter for sharing links, various apps for sharing photos and videos). A third reason is that not that many people actually want to blog…

Fortunately, most of the reasons why blogs as a whole are dying don’t apply to science blogs written by academics. Academic scientists have day jobs that often pay pretty well, and tenured ones have as much job security as anyone ever does. Academics don’t need to make money from blogs, they can do it for real but intangible rewards…

So how come there’s no ecology blogosphere? And how come many ecology blogs either have died or post much less often than they used to (e.g., Just Simple Enough*, Jabberwocky Ecology)? And how come new ecology blogs are so scarce, and mostly peter out after only a few posts without ever building much of an audience? Not that you’d expect most ecologists to blog, but so few puzzles me a little. And it’s not just a puzzle for ecology, since there’s no blogosphere worthy of the name for any scholarly field except economics.

But Paige Brown Jarreau actually studies this and is writing her dissertation on it. Here is what she has to say:

Many science bloggers I interviewed and surveyed talked about their blogs today as a place for extended thoughts from Twitter and other “faster” social media streams. According to my dissertation data, academics and science writers alike continue to use their blogs…

– as a home for their writing

– as a portfolio

– as a place to be able to write without strict editorial oversight

– as a place to stick extras that don’t fit elsewhere, either in the academic publishing world or in the larger science content ecosystem

– as a place for opinion, interpretation, analysis and curation

– as a place to cover in depth the stories and scientific papers not being covered by the media (what I call Ecosystem Blogging, or covering what’s missing from the existing content ecosystem)

– as a place to add context missing from news and social media

And here is her fantastic network diagram of how blogs are linked (I am a small dot in between the neuroscience blogs and the ecology blogs, fittingly enough):

[Figure: network diagram of which blogs read which, nodes colored by community and sized by in-degree]

I only started blogging something like a year or two ago, so I certainly couldn’t tell you whether blogs are dying or growing or changing or what. Things seem pretty much the same to me. There are a lot of blogs about science and science culture; there are a lot of blogs explaining science to a lay audience; there are a few blogs that discuss the science at a professional level. But I know that there is demand for it: at every conference I go to, I meet people who read my blog.

But we can’t pretend that the community isn’t fragmenting in strange ways. Last week, I posted one of my intermittent Monday Open Questions. It got zero comments on my blog. However! It got comments on Google+ and tons on Twitter. There was a lot of discussion – it just routed around my blog. Blogs aren’t hubs for discussion and interaction; they are the start of the conversation.

I always find it a bit of a shame because it is hard to make everything accessible to a large audience. I know there are people who read this blog through my RSS feed, and who read it through G+, and who read it through Twitter, and who just come to it every so often. And they are going to have very different experiences with it.

(As an addendum: it would be quite nice if there was a way to automatically grab responses to specific blog posts on twitter/G+ and embed them in the comments section.)
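A hypothetical sketch of what that could look like, using Twitter’s public search and oEmbed endpoints – the bearer token and all the blog-side plumbing here are assumptions, not a working plugin:

```python
import requests

BEARER_TOKEN = "..."  # from a registered Twitter app; placeholder here

def tweets_about(post_url):
    # Search the v1.1 API for tweets that link to a given post.
    resp = requests.get(
        "https://api.twitter.com/1.1/search/tweets.json",
        params={"q": post_url, "count": 50},
        headers={"Authorization": "Bearer " + BEARER_TOKEN},
    )
    return resp.json().get("statuses", [])

def embed_html(tweet):
    # Twitter's oEmbed endpoint returns ready-to-paste HTML for a tweet.
    resp = requests.get(
        "https://publish.twitter.com/oembed",
        params={"url": "https://twitter.com/i/status/" + tweet["id_str"]},
    )
    return resp.json()["html"]

# A blog could then append the returned HTML snippets to a post's
# comments section.
```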

#Cosyne2015, by the numbers

 

[Figure: Cosyne 2015 posters per author]

Another year, another Cosyne. Sadly, I will be there only in spirit (and not, you know, reality). But I did manage to get my hands all over the Cosyne abstract author data… I can now tell you everyone who has had a poster or talk presented there, and who it was with. Did you know Steven Pinker was a coauthor on a paper in 2004?!

This year, the ‘most posters’ award (aka, the Hierarch of Cosyne) goes to Carlos Brody. Carlos has been developing high-throughput technology to really bang away at the hard problem of decision-making in rodents, and now all that work is coming out at once. Full disclosure: his lab sits above mine, and they are all doing really awesome work.

Here are the Hierarchs, historically:

  • 2004: L. Abbott/M. Meister
  • 2005: A. Zador
  • 2006: P. Dayan
  • 2007: L. Paninski
  • 2008: L. Paninski
  • 2009: J. Victor
  • 2010: A. Zador
  • 2011: L. Paninski
  • 2012: E. Simoncelli
  • 2013: J. Pillow/L. Abbott/L. Paninski
  • 2014: W. Gerstner
  • 2015: C. Brody

[Figure: total Cosyne posters per author, 2004–2015]

Above is the total number of posters/abstracts by author. There are prolific authors, and then there is Liam Paninski. Congratulations, Liam: you maintain your iron grip as the Pope of Cosyne.

As a technical note, I took ‘unique’ names by pairing the first initial of the first name with the last name. I’m pretty sure X. Wang is at least two or three different people, and some names (especially those with an umlaut or, for some reason, Paul Schrater) are especially likely to change spelling from year to year. I tried correcting a bit, but fair warning.
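In code, the collapsing rule is just a few lines – a minimal sketch, assuming the scraped abstract data comes as a list of author lists (the toy `posters` below stands in for the real data):

```python
from collections import Counter

def author_key(name):
    # 'Liam Paninski' and 'L. Paninski' both become 'L. Paninski';
    # unfortunately, so do two genuinely different 'X. Wang's.
    parts = name.replace(".", " ").split()
    return (parts[0][0].upper() + ". " + parts[-1]) if parts else name

posters = [["Liam Paninski", "Jonathan Pillow"],
           ["L. Paninski", "Xue Wang"],
           ["Xiao Wang"]]  # toy stand-in for the scraped data

counts = Counter(author_key(a) for authors in posters for a in authors)
print(counts.most_common(3))  # crowns the Hierarch
```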

[Figure: poster-count distribution, 2004–2015]

 

As I mentioned last year, the distribution of posters follows a power law.
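If you want to eyeball that claim yourself, plot the poster-count distribution on log-log axes, where a power law falls roughly on a straight line – a sketch reusing the `counts` Counter from above:

```python
import numpy as np
import matplotlib.pyplot as plt

# How many authors have exactly k posters, for each k that occurs?
ks = np.array(sorted(set(counts.values())))
n_authors = np.array([sum(1 for v in counts.values() if v == k) for k in ks])

plt.loglog(ks, n_authors, "o")
plt.xlabel("posters per author (k)")
plt.ylabel("number of authors with k posters")
plt.show()
```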

But now we have the network data and it is pretty awesome to behold. I was surprised that if we just look at this year’s posters, there is tons of structure (click here for a high-res, low-size PDF version):
[Figure: Cosyne 2015 coauthorship network]

When you include both 2014 and 2015, things get even more connected (again, PDF version):

[Figure: Cosyne 2014–2015 coauthorship network]

Beyond this it starts becoming a mess. The community is way too interconnected and lines fly about every which way. If anyone has an idea of a good way to visualize all the data (2004-2015), I am all ears. And as I said, I have the full connectivity diagram so if anyone wants to play around with the data, just shoot me an email at adam.calhoun at gmail.
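For anyone who does want to play with it, the graph itself is easy to build – a sketch with networkx, again assuming the list-of-author-lists format from the earlier snippets:

```python
import matplotlib.pyplot as plt
import networkx as nx

G = nx.Graph()
for authors in posters:
    # Every pair of coauthors on a poster gets an edge; repeat
    # collaborations bump the edge weight.
    for i, a in enumerate(authors):
        for b in authors[i + 1:]:
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1
            else:
                G.add_edge(a, b, weight=1)

# One possible answer to the visualization problem: draw only the
# k-core, the densely interconnected heart of the community.
core = nx.k_core(G, k=3)
nx.draw_spring(core, node_size=20, width=0.2)
plt.show()
```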

Any suggestions for further analyses?

 

The not-so-frivolous function of play?

We play. Cats play. Dogs play. Horses play. Do fish play? Do cockroaches play? What is the function of play?!

[P]lay is actually at the center of a spectrum of three behavior types: [exploration, play, and stereotypies]. Both exploration and stereotypic behaviors can be easily mistaken for play. Exploration refers to an animal’s reaction to a novel environment or stimuli. For example, if you give a child a new toy, they will generally eagerly take it and examine and manipulate it. However, after thoroughly investigating the new toy, the child may toss it aside and play with their favorite beat-up GI Joe doll…

This doesn’t mean that every species plays, mind you; certainly not every mammal species. Even closely related groups can be vastly different – rats play mountains more than mice do, for example, and some species, like aardvarks, don’t appear to play at all. Still, almost every major group of mammals has some representatives that show play behavior…

Despite the popular conception that play is practice for later life skills, there is almost zero evidence to back it up. Cats who pounced and batted at objects as kittens were no better at hunting than cats with limited object play; the same went for coyotes and grasshopper mice. Rats, meerkats, wolves, and many primate species are no better at winning fights based on how often they play fight as youngsters.

Did you know that there is a ‘preeminent play scientist’ and he has five criteria to define play? They are:

  1. The performance of the behavior is not fully functional in the form or context in which it is expressed; that is, it includes elements, or is directed towards stimuli, that do not contribute to current survival.

  2. The behavior is spontaneous, voluntary, intentional, pleasurable, rewarding, reinforcing, or autotelic (done for its own sake).

  3. It differs from the “serious” performance of ethotypic behavior structurally or temporally in at least one respect: it is incomplete (generally through inhibited or dropped final element), exaggerated, awkward, or precocious; or it involves behavior patterns with modified form, sequencing, or targeting.

  4. The behavior is performed repeatedly in a similar, but not rigidly stereotyped, form during at least a portion of the animal’s ontogeny.

  5. The behavior is initiated when the animal is adequately fed, healthy, relaxed, and free from stress (e.g. predator threat, harsh microclimate, social instability) or intense competing systems (e.g. feeding, mating, predator avoidance).

You have to go read the full article, if for nothing other than all the adorable videos of animals playing.

This is much, much better than that really dumb David Graeber article arguing that science needs to be about play and fun.

Monday Open Question: The unsolved problems of neuroscience?

Over at NeuroSkeptic, there was a post asking “what are the unsolved problems of neuroscience?” For those interested in this type of question, there are more such questions here and here. This, obviously, is catnip to me.

Modeled on Hilbert’s famous 23 problems in mathematics, the list comes from Ralph Adolphs and has questions such as “how do circuits of neurons compute?” and “how could we cure psychiatric and neurological diseases?” For me, I found the meta-questions most interesting:

Meta-question 1: What counts as understanding the brain?

Meta-question 2: How can a brain be built?

Meta-question 3: What are the different ways of understanding the brain?

But the difference between the lists from Hilbert and Adolphs is very important: Hilbert asked precise questions. The Adolphs questions often verge on extreme ambiguity.

Mathematics has an advantage over biology in its precision. We (often) know what we don’t know. Is neuroscience even at that point? Or would it be more fruitful to propose a systematic research plan?

Me, I would aim my specific questions at something more basic and precise than most of those on the list. For the sake of argument, here are a couple possible questions:

  • Does the brain compute Bayesian probabilities, and if so how? (Pouget says yes, Marcus says no?)
  • How many equations are needed to model any given process in the nervous system?
  • How many distinct forms of long-term potentiation/depression exist?

So open question time:

What (specific) open question do you think is most important?

or: What are some particularly fruitful research programs? (I am thinking of the Langlands program here.)

How DeepMind learns to win

About a year ago, DeepMind was bought for half a billion dollars by Google for creating software that could learn to beat video games. Over the past year, DeepMind has detailed how they did it.

[Image: DeepMind vs. evolution]

Let us say that you were an artificial intelligence that had access to a computer screen, a way to play the game (an imaginary video game controller, say), and the current score. How should it learn to beat the game? Well, it has access to three things: the state of the screen (its input), a selection of actions, and a reward (the score). What the AI wants to do is find the best action to pair with every state.

A well-established way to do this without any explicit modeling of the environment is through Q-learning (a form of reinforcement learning). In Q-learning, every time you encounter a certain state and take an action, you have some guess of the total reward that will follow. But the world is a complicated, noisy place, so you won’t necessarily always get the same reward back in seemingly-identical situations. So you take the difference between what you actually get (the immediate reward, plus the best you expect to do from the next state) and what you predicted, and nudge your guess a little closer.
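In code, the classic tabular version is only a few lines – a minimal sketch with made-up constants, not DeepMind’s implementation:

```python
import random
from collections import defaultdict

ALPHA = 0.1   # learning rate: how far to nudge the old guess
GAMMA = 0.99  # discount: how much the future matters relative to now

Q = defaultdict(float)  # maps (state, action) -> estimated future reward

def update(state, action, reward, next_state, actions):
    # The best we currently think we can do from the next state.
    best_next = max(Q[(next_state, a)] for a in actions)
    # Temporal-difference error: what we got vs. what we guessed.
    td_error = reward + GAMMA * best_next - Q[(state, action)]
    Q[(state, action)] += ALPHA * td_error  # nudge the guess closer

def choose_action(state, actions, epsilon=0.1):
    # Epsilon-greedy: mostly exploit the best guess, sometimes explore.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])
```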

This is all fine and dandy, but when you’re looking at a big screen you’ve got a large number of pixels – and a huge number of possible states. Some of them you may never even get to see! Every twist and contortion of two pixels is, theoretically, a completely different state. This makes it infeasible to visit each state, try each action, and play again and again to get a good estimate of the reward.

What we could do, if we were clever about it, is to use a neural network to learn features about the screen. Maybe sometimes this part of the screen is important as a whole and maybe other times those two parts of the screen are a real danger.
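For concreteness, here is roughly what such a network looks like – a sketch in PyTorch (which postdates the paper), with layer sizes following the Nature paper’s description:

```python
import torch.nn as nn

class QNetwork(nn.Module):
    """Stacked game frames in, one estimated value per action out."""
    def __init__(self, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            # Convolutions learn 'filters' over the raw pixels...
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            # ...and the dense layers map those features to Q-values.
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, frames):  # frames: (batch, 4, 84, 84) grayscale
        return self.net(frames)
```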

But that is difficult for the Q-learning algorithm. The DeepMind authors list three reasons: (1) correlations in the sequence of observations, (2) small updates to Q can significantly change the policy, and hence the data distribution, and (3) correlations between the action values and the target values. How they tackle these problems is the paper’s main contribution to the literature.

The strategy is to implement a deep convolutional neural network to find ‘filters’ that can more easily represent the state space. The network takes in the states – the images on the screen – processes them, and then outputs a value for each possible action. In order to get around problems (1) and (3) above (the correlations in observations), they take a ‘replay’ approach. Actions that have been taken are stored in memory; when it is time to update the neural network, they grab some of the old state-action pairs out of their bag of memories and learn from those. They liken this to consolidation during sleep, where the brain replays things that happened during the day.
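The replay buffer itself is simple – a sketch of the idea, with illustrative buffer and batch sizes:

```python
import random
from collections import deque

memory = deque(maxlen=100_000)  # oldest memories eventually fall out

def remember(state, action, reward, next_state):
    memory.append((state, action, reward, next_state))

def sample_batch(batch_size=32):
    # Random draws break up the correlations between consecutive frames.
    return random.sample(memory, batch_size)
```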

Further, even though they train the network with their memories after every action, they also keep a second copy of the network frozen in stasis; that frozen copy is what generates the learning targets, and it only ‘updates itself’ with what the live network has learned after a certain stretch of time – again, like going to “sleep” to consolidate what it had done during the day.
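Reusing the QNetwork sketch from above, that ‘sleep’ schedule might look like this (the sync interval below is illustrative, though the paper does describe refreshing the frozen copy on the order of every 10,000 updates):

```python
online_net = QNetwork(n_actions=4)   # the one being trained constantly
target_net = QNetwork(n_actions=4)   # the frozen one used for targets
target_net.load_state_dict(online_net.state_dict())  # start in sync

SYNC_EVERY = 10_000
for step in range(1_000_000):
    ...  # act, call remember(), train online_net on sample_batch()
    if step % SYNC_EVERY == 0:
        target_net.load_state_dict(online_net.state_dict())  # 'wake up'
```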

Here is an explanation of the algorithm in a hopefully useful form:

[Figure: schematic of the deep Q-learning algorithm]

Throughout the article, the authors claim that this may point to new directions for neuroscience research. This being published in Nature, any claims to utility should be taken with a grain of salt. That being said! I am always excited to see what lessons arise when theories are forced to confront reality!

What this shows is that reinforcement learning is a good way to train a neural network in a model-free way. Given that all learning is temporal difference learning (or: TD learning is semi-Hebbian?), this is a nice result, though I am not sure how original it is. It also shows that the replay way of doing it – which I believe is quite novel – is a good one. But is this something that sleep/learning/memory researchers can learn from? Perhaps it is a stab in the direction of why replay is useful (to deal with correlations).

References

Mnih V, Kavukcuoglu K, Silver D, Rusu AA, Veness J, Bellemare MG, Graves A, Riedmiller M, Fidjeland AK, Ostrovski G, Petersen S, Beattie C, Sadik A, Antonoglou I, King H, Kumaran D, Wierstra D, Legg S, & Hassabis D (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529–533. PMID: 25719670

Mnih V, Kavukcuoglu K, Silver D, Graves A, Antonoglou I, Wierstra D, & Riedmiller M (2013). Playing Atari with deep reinforcement learning. arXiv: 1312.5602v1

Is it white and gold? Is it blue and black?

[Photo: the dress]

By now, I am sure that you have seen this picture. Some people see it as blue and black and some people see it as white and gold. Two people can be sitting right next to each other and see totally different things! It happened to me last night.

Wired attempts to explain it:

“What’s happening here is your visual system is looking at this thing, and you’re trying to discount the chromatic bias of the daylight axis,” says Bevil Conway, a neuroscientist who studies color and vision at Wellesley College. “So people either discount the blue side, in which case they end up seeing white and gold, or discount the gold side, in which case they end up with blue and black.” (Conway sees blue and orange, somehow.) [ed. that makes her the devil.]

Essentially, it is an issue of color constancy: the color we perceive depends on its context. Brightening and darkening the image supports that.
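You can try the manipulation yourself in a couple of lines – a sketch with Pillow, where the file name is just a placeholder:

```python
from PIL import Image, ImageEnhance

dress = Image.open("the_dress.jpg")  # wherever you saved the photo
for factor in (0.5, 1.0, 1.6):
    # Below 1.0 darkens the image, above 1.0 brightens it; watch how
    # the perceived colors of the dress shift with the context.
    ImageEnhance.Brightness(dress).enhance(factor).show()
```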

See also XKCD’s take on it.

But that explains only one, trivial, ‘why’ – why a single ‘color’ can look different depending on context. What it does not explain is why some people see the dress as white and gold and others see it as totally the opposite. Why is there this individual-level variation?

It seems to sit right on some threshold: some people have an in-built or learned bias to favor – well, something. Light images? Dark images? Overhead light? And others have a different bias. If it were simply a matter of ambient light, presumably you could lock five people in a closet and, when they came out, they would see it one way (maybe blue and black); push five others out into the sun and they’d see it differently (white and gold). But I haven’t seen a good explanation of this, nor of why it is so bimodal. I would bet someone money that a scientific paper on this illusion will be published within the next year or two.

In conclusion, it’s white and gold because that’s all I can see. Case closed.


Microcircuits are SO HOT right now

[Figure: Google Scholar hits per year for ‘microcircuits neural’]

So hot.

We use the tools that we have, and right now that means genetic specificity with calcium imaging and channelrhodopsin. In other words: how do groups of identified neurons operate? In even fewer words: microcircuits.

I am probably reading too much into things, but it seems like microcircuits are the new hotness. Every week there’s a new paper using the word (soon to solidify its buzzword status). I looked up the publications that used the term and found something interesting. Compare the number of publications I found through Google Scholar* (above) – which indexes a very broad and interdisciplinary mix of journals – and PubMed (below) – which indexes mostly biomedical journals:

[Figure: PubMed hits per year for ‘microcircuits’]

The number of publications in Google Scholar is fairly steady until 1999, when it starts steadily increasing. There’s very little action in PubMed until 2002, when it starts rocketing off. What’s happening? Many of the papers on Google Scholar have a computational or physics-y bent, appearing in such exciting places as the ‘International Journal of Man-Machine Studies’. For years, these poor computational fools labored away unnoticed, until the experimental tools caught up to the theory: hence the sudden interest.
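If you want to reproduce the PubMed counts, NCBI’s E-utilities will give you hits per year – a sketch using Biopython’s Entrez wrapper (the email address is a placeholder; NCBI asks for a real contact):

```python
from Bio import Entrez  # Biopython's wrapper around NCBI E-utilities

Entrez.email = "you@example.com"  # placeholder; use your own address

for year in range(1990, 2016):
    # Restrict the search to one publication year at a time.
    handle = Entrez.esearch(db="pubmed", term="microcircuits",
                            mindate=str(year), maxdate=str(year),
                            datetype="pdat")
    print(year, Entrez.read(handle)["Count"])
```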

The very first reference that I can find is A reinterpretation of the mathematical biophysics of the central nervous system in the light of neurophysiological findings by N. Rashevsky in 1945. Yes, that Rashevsky. Unfortunately, I can’t seem to get access to the paper itself. Same goes for the next paper in 1957, from D. Stanley-Jones (Reverberating circuits). And then it took off from there…

In a different tradition, we can trace this fine term to Eric Kandel, who appears to have coined its neuroscience usage in this 1978 paper, where they “reduced this isolated reflex to a microcircuit (consisting of a single sensory cell and single motor cell) so as to causally relate the contribution of individual cells to the expression and plastic properties of the behavior.” Nary a peep was heard about microcircuitry until 1991, when Douglas and Martin mapped a “functional microcircuit” for visual cortex.

(What is the equivalent of Mainen and Sejnowski in microcircuits? Someone has to write that crisp paper so that they, too, can get ALL the citations.)

* technically, I searched for ‘microcircuits neural’

The Chronicle vs. The Human Brain Project

In case you haven’t seen this hilariously vicious anti-Human Brain Project article:

If you want to make a neuroscientist scoff, mention the billion-dollar-plus Human Brain Project…Even before it began, the project was ridiculed by those in the know. Words like “hooey” and “crazy” were thrown around, along with less family-friendly terms…Almost no one—except for those on the project’s ample payroll—seemed to think it was a good idea.

In reply to an interview request, Konrad Kording, a neuroscientist at Northwestern University, wrote back: “Why do you want to talk about this embarrassing corpse?” He added a smiley emoji, but he’s not really kidding. Mr. Kording has nothing nice to say about a project that, according to him, has become a reliable punchline among his colleagues. “I’m 100-percent convinced that virtually all the money spent on it will lead to no insight in neuroscience whatsoever,” he said. “It’s a complete waste. It’s boneheaded. It’s wrong. Everything that they’re trying to do makes no sense whatsoever.”

Jeremy Freeman is similarly skeptical, if a touch more diplomatic. Mr. Freeman, a neuroscientist at the Howard Hughes Medical Institute, sees it as “kind of an absurd project” and misguided to boot. “Insofar as the goal is to establish a working simulation of the entire human brain, or even a single cortical column, I believe that it’s premature,” he said, chuckling. “I also don’t think rushing toward a simulation is the right avenue for progress.”

et cetera. I mean, regardless of how you feel about the project you have to appreciate inspired academic vitriol when you see it (unless you are the target of it, obviously).

Konrad Kording’s objections to the HBP are probably more informative, however (more detail in link):

1) We lack the knowledge needed to build meaningful bottom up models and I will just give a few examples:
a) We know something about a small number of synapses but not how they interact
b) We know something about a small number of cell types, but not about the full multimodal statistics (genes, connectivity, physiology)

The degree of the lack of knowledge is mindboggling. There are far more neurons in the brain than people on the planet.

2) We do not know how to combine multimodal information

3) We do not know what the right language is for simulating the brain.