The ballerina illusion

No matter how hard I try, I cannot get the spinning dancer illusion to flip. Looks like someone has found a much more powerful version of the illusion:

[embedded animation: spinning optical illusion]

What I especially like about this version of the illusion is that I can totally get why it is happening. There aren’t great depth cues, so there is a powerful prior that if you can see a face, that face is looking in your direction – hence the direction flipping.

(via kottke)

Posted in Art

These are the Computational [and Systems] Neuroscience Blogs (updated)

I was recently asked which blogs deal with Computational Neuroscience. There aren’t a lot of them – most neuroscience blogs are very psych/cog focused because, honestly, that’s what the majority of the public cares about. Here are all of the ones that I know of (I am including Systems Neuro because it can be hard to disambiguate these things):

Interesting (Computational) Neuroscience Papers

Pillow Lab Blog

Memming

Anne Churchland

Bradley Voytek

xcorr

Quasiworking memory

Paxon Frady’s blog

Its Neuronal

Romain Brette’s Blog

There is one other that I am blanking on and cannot find in my feedly right now. I will update later, and would welcome any suggestions!

 

RIP Marvin Minsky, 1927-2016

[photo: Marvin Minsky in Detroit]

I awoke to sad news this morning – Marvin Minsky passed away at the age of 88. Minsky’s was the first serious work on artificial intelligence that I ever read and one of the reasons I am in neuroscience today.

Minsky is perhaps most infamous for his book Perceptrons (written with Seymour Papert), which showed that the single-layer neural networks of the time could not compute functions such as XOR (here is the solution, which every neuroscientist should know!).
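
The fix for XOR is worth seeing at least once: you need a hidden layer. Here is a minimal sketch of my own (hand-wired weights, not anything from the book) showing two threshold units that make XOR separable for the output unit:

```python
# Minimal sketch: XOR with one hidden layer of threshold units.
# A single-layer perceptron cannot separate XOR; two hidden units can.
import numpy as np

def step(x):
    return (x > 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Hidden unit 1 computes OR, hidden unit 2 computes AND
W_hidden = np.array([[1, 1],
                     [1, 1]])
b_hidden = np.array([-0.5, -1.5])
H = step(X @ W_hidden.T + b_hidden)

# Output unit: OR and not AND, i.e. exactly one input is on
w_out = np.array([1, -1])
y = step(H @ w_out - 0.5)

print(y)  # [0 1 1 0]: the XOR truth table
```

Without the hidden layer, no choice of weights and threshold works, because XOR is not linearly separable.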

Minsky is also known for the Dartmouth Summer Research Conference, whose proposal is really worth reading in full.

Fortunately, Minsky put many of his writings online, which I have been rereading this morning. You could read his thoughts on communicating with Alien Intelligence:

All problem-solvers, intelligent or not, are subject to the same ultimate constraints–limitations on space, time, and materials. In order for animals to evolve powerful ways to deal with such constraints, they must have ways to represent the situations they face, and they must have processes for manipulating those representations.

ECONOMICS: Every intelligence must develop symbol-systems for representing objects, causes and goals, and for formulating and remembering the procedures it develops for achieving those goals.

SPARSENESS: Every evolving intelligence will eventually encounter certain very special ideas–e.g., about arithmetic, causal reasoning, and economics–because these particular ideas are very much simpler than other ideas with similar uses.

He also mentions this, which sounds fascinating. I was not aware of it and cannot find the actual paper. If anyone can send me the citation, please leave a comment!

A TECHNICAL EXPERIMENT. I once set out to explore the behaviors of all possible processes–that is, of all possible computers and their programs. There is an easy way to do that: one just writes down, one by one, all finite sets of rules in the form which Alan Turing described in 1936. Today, these are called “Turing machines.” Naturally, I didn’t get very far, because the variety of such processes grows exponentially with the number of rules in each set. What I found, with the help of my student Daniel Bobrow, was that the first few thousand such machines showed just a few distinct kinds of behaviors. Some of them just stopped. Many just erased their input data. Most quickly got trapped in circles, repeating the same steps over again. And every one of the remaining few that did anything interesting at all did the same thing. Each of them performed the same sort of “counting” operation: to increase by one the length of a string of symbols–and to keep repeating that. In honor of their ability to do what resembles a fragment of simple arithmetic, let’s call them “A-Machines.” Such a search will expose some sort of “universe of structures” that grows and grows. For our combinations of Turing machine rules, that universe seems to look something like this:

[figure: Minsky’s universe of Turing machine behaviors]
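
I could not find the paper either, but the flavor of the experiment is easy to reproduce in miniature. Below is a rough sketch of my own (it assumes nothing about what Minsky and Bobrow actually ran): enumerate every two-state, two-symbol machine, run each briefly on a blank tape, and tally coarse behaviors like the ones he describes. Some halt, some get trapped in loops, and the rest keep marching along the tape.

```python
# Rough sketch (my construction, not Minsky and Bobrow's code): enumerate all
# 12**4 = 20,736 two-state, two-symbol Turing machines, run each briefly on a
# blank tape, and tally coarse behaviors. Takes a few seconds to run.
from itertools import product
from collections import Counter

STATES = (0, 1)     # two working states, plus an implicit halt state (-1)
SYMBOLS = (0, 1)
MOVES = (-1, 1)     # move left or right

# Each transition-table entry maps (state, symbol) -> (write, move, next_state).
actions = list(product(SYMBOLS, MOVES, STATES + (-1,)))
tables = product(actions, repeat=len(STATES) * len(SYMBOLS))

def run(table, max_steps=50):
    """Run one machine on a blank tape and classify its coarse behavior."""
    tape, pos, state = {}, 0, 0
    seen = set()
    for _ in range(max_steps):
        if state == -1:
            return "halts"
        config = (state, pos, tuple(sorted(tape.items())))
        if config in seen:          # exact configuration repeated: true loop
            return "loops"
        seen.add(config)
        write, move, next_state = table[state * len(SYMBOLS) + tape.get(pos, 0)]
        tape[pos], pos, state = write, pos + move, next_state
    return "keeps going (e.g. marching along the tape)"

print(Counter(run(t) for t in tables))
```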

In Why Most People Think Computers Can’t, he gets off a couple of cracks at people who think computers can’t do anything humans can:

Most people assume that computers can’t be conscious, or self-aware; at best they can only simulate the appearance of this. Of course, this assumes that we, as humans, are self-aware. But are we? I think not. I know that sounds ridiculous, so let me explain.

If by awareness we mean knowing what is in our minds, then, as every clinical psychologist knows, people are only very slightly self-aware, and most of what they think about themselves is guess-work. We seem to build up networks of theories about what is in our minds, and we mistake these apparent visions for what’s really going on. To put it bluntly, most of what our “consciousness” reveals to us is just “made up”. Now, I don’t mean that we’re not aware of sounds and sights, or even of some parts of thoughts. I’m only saying that we’re not aware of much of what goes on inside our minds.

Finally, he has some things to say on Symbolic vs Connectionist AI:

Thus, the present-day systems of both types show serious limitations. The top-down systems are handicapped by inflexible mechanisms for retrieving knowledge and reasoning about it, while the bottom-up systems are crippled by inflexible architectures and organizational schemes. Neither type of system has been developed so as to be able to exploit multiple, diverse varieties of knowledge.

Which approach is best to pursue? That is simply a wrong question. Each has virtues and deficiencies, and we need integrated systems that can exploit the advantages of both. In favor of the top-down side, research in Artificial Intelligence has told us a little—but only a little—about how to solve problems by using methods that resemble reasoning. If we understood more about this, perhaps we could more easily work down toward finding out how brain cells do such things. In favor of the bottom-up approach, the brain sciences have told us something—but again, only a little—about the workings of brain cells and their connections.

Apparently, he viewed the symbolic/connectionist split like so:

[figure: Minsky’s view of connectionist vs. symbolic AI]

Alert: empirical parasites are taking advantage of data scientists

The aerial view of the concept of collecting data is beautiful. What could be better than high-quality information carefully examined to give a p-value of less than .05? Leveraging these results into narrow papers in high-profile journals, never to be checked except by other independent studies costing thousands – tens of thousands – of dollars, is a moral imperative that honors those who put the time and effort into collecting that data.

However, many of us who have actually performed data analyses, managed large data sets and analyses, and curated data sets have concerns about the details. The first concern is that someone who is not regularly involved in the analysis of data may not understand the choices involved in statistical testing. Special problems arise if data are to be combined from independent experiments and considered comparable. How heterogeneous were the study populations? Does the underlying data fulfill the assumptions for each test? Can it be assumed that the differences found are due to chance or improper correction for complex features of the data set?

A second concern held by some is that a new class of research person will emerge – people who have very little mathematical and computational training but analyze data for their own ends, possibly stealing from the research productivity of those who have invested much of their career in these very areas, or even use the data to try to prove what the original investigators had posited before data collection! There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as “empirical parasites”.

Wait wait, sorry, that was an incredibly stupid argument. I don’t know how I could have even come up with something like that… It’s probably something more like this:

A second concern held by some is that a new class of research person will emerge — people who had nothing to do with the design and execution of the study but use another group’s data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited. There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as “research parasites.”

Yes, that’s it, open science could lead the way to research parasites analyzing other people’s data. I now look forward to the many other subtle insights on science that the editors of NEJM have to say.

Friday Fun: This newly-discovered spider courts by playing peek-a-boo

Watch these spiders seductively waggle their fingers at each other in their search for love. Then go read this paper from the Hoy lab in which they perform the first neural recordings from a jumping spider!

Enjoy your blizzard, Northeasterners.

Doudnagate

“They claimed merely to be scientists discovering facts; [I] doggedly argued that they were writers and readers in the business of being convinced and convincing others.”

“By being sufficiently convincing, people will stop raising objections altogether, and the statement will move toward a fact-like status. Instead of being a figment of one’s imagination (subjective), it will become a “real objective thing,” the existence of which is beyond doubt.”

Latour, Laboratory Life

When reading any review article in science, the most interesting thing is not the facts themselves but which of them the author has chosen to discuss. Scientific knowledge is both vast and deep, and every author surveying this territory must pick and choose from it. More importantly, what the author chooses to say broadcasts both what and how they are thinking about the topic. Like a philosophy, this can give a careful reader a new way to think about, say, retinal ganglion cells, or information maximization, or, I don’t know, the history of CRISPR.

So: if you are Eric Lander, eminent biologist, how would you go about shaping your history of CRISPR? Especially given that it is in the midst of a vicious patent battle between Berkeley and the Broad Institute (which you happen to head), between Doudna and Zhang (your colleague)?

Apparently you would do it in a way that would piss a lot of people off.

Furthermore, Jennifer Doudna of the University of California, Berkeley—who, along with Emmanuelle Charpentier of the Helmholtz Centre for Infection Research in Germany, is currently locked in the patent dispute with the Broad’s Feng Zhang and colleagues—called Lander’s account “factually incorrect” in a January 17 PubMed Commons comment. Doudna wrote that Lander’s description of her lab “and our interactions with other investigators . . . was not checked by the author and was not agreed to by me prior to publication.”…

One of those scientists was George Church, who has appointments at Harvard and the Broad and has collaborated with Zhang and others on CRISPR research. “Eric [Lander] asked me some very specific questions on 14-Dec and I offered to fact check (as I generally do),” Church wrote in an email to The Scientist. “He sent me a preprint on 13-Jan (just hours before it came out in Cell).  I immediately sent him a list of factual errors, none of which have been corrected.”

Everything that I would say about this has already been said in voluminous form, mostly by Lior Pachter:

All of the research papers referenced in the Lander perspective have multiple authors, on average about 7, and going up to 20. When discussing papers such as this, it is therefore customary to refer to “the PI and colleagues…” in lieu of crediting all individual authors. Indeed, this phrase appears throughout Lander’s perspective, where he writes “Moineau and colleagues..”, “Siksnys and colleagues..”, etc. It is understood that this means that the PIs guided projects and provided key insights, yet left most of the work to the first author(s) who were in turn aided by others…

But in choosing to highlight a “dozen or so” scientists, almost all of whom are established PIs at this point, Lander unfairly trivializes the contributions of dozens of graduate students and postdocs who may have made many of the discoveries he discusses, may have developed the key insights, and almost certainly did most of the work. For example, one of the papers mentioned in the Lander perspective is

and also by Dominic Berry:

My historical muscle reflex was provoked by a case detailed in Graeme Gooday and Stathis Arapostathis’ Patently Contestable (2013) from the history of wireless telegraphy. After key patents were awarded to one Guglielmo Marconi at the end of the nineteenth century, a range of different histories of wireless telegraphy began to emerge, ones that more or less stressed the greater importance of other individual scientists, or the international collective. The ‘Arch-builders of Wireless Telegraphy’ as John Joseph Fahie called them in one of the earliest of these histories (1899), are brought together most evocatively in this visual representation from his book on the subject. Fahie’s intention, so Gooday and Arapostathis argue, was to decenter Marconi from this history, making his patent claims look less legitimate or at least less worthy.

And now on to the fun stuff: dirty dirty gossip.

Read this storify of Michael Eisen’s rant on the review.

Note that Jennifer Doudna had to comment on PubMed Commons because Cell didn’t approve her comment.

Note also that the Doudna lab retweeted this snarky tweet amidst other more scientifically-minded ones:

[embedded tweet]

There were two recent popular press articles, one profiling Doudna and the other Zhang (each with its own set of problems assigning credit). Turns out the one profiling Doudna was written by a Berkeley employee and didn’t have a COI statement (the backstory here is that this is being seen as escalating from Doudna vs. Zhang to a broader Berkeley vs. Broad Institute fight).

Go read the PubPeer comment thread which has fun facts like:

He also omits the fact that Doudna had already published 10 papers on CRISPR before her paper with Charpentier. “She had been using crystallography and cryo-EM to solve structures….” – no mention of any of the many insights from her work, in striking contrast to the detailed accounting of virtually all other investigators. The intended impression seems to be that she was a minor player, putting out a couple of technical observations, before her chance meeting with Charpentier.

Paul Knoepfler asked whether people thought Lander gave enough credit to Doudna and the twitterverse says no.

[embedded Twitter poll results]

I feel like I’m missing something. Anything else? I do love gossip.

More seriously, this is your chance to see how history is actually made.

The eye(s)

[photo by artofdreaming]

Ed Yong has an absolutely brilliant article in National Geographic about the receptacle for neuroscience’s favorite sensory system, vision:

But simple eyes should not be seen as just stepping-stones along a path toward greater complexity. Those that exist today are tailored to the needs of their users. A sea star’s eyes—one on the tip of each arm—can’t see color, fine detail, or fast-moving objects; they would send an eagle crashing into a tree. Then again, a sea star isn’t trying to spot and snag a running rabbit. It merely needs to spot coral reefs—huge, immobile chunks of landscape—so it can slowly amble home. Its eyes can do that; it has no need to evolve anything better. To stick an eagle’s eye on a sea star would be an exercise in ludicrous excess…

“Insects and crustaceans have become so successful despite their compound eyes, not because of them,” says Nilsson. “They would have done so much better with camera-type eyes. But evolution didn’t find that. Evolution isn’t clever.”

Eric Warrant, Nilsson’s next-door neighbor at Lund University, takes a more lenient view. “Insect eyes have a much faster temporal resolution,” he says. “Two flies will chase each other at enormous speed and see up to 300 flashes of light a second. We’re lucky to see 50.” A dragonfly’s eye gives it almost complete wraparound vision; our eyes do not. And the elephant hawk moth, which Warrant has studied intensely, has eyes so sensitive that it can still see colors by starlight. “In some ways we’re better, but in many ways, we’re worse,” Warrant says. “There’s no eye that does it all better.”

Our camera eyes have their own problems. For example, our retinas are bizarrely built back to front. The photoreceptors sit behind a tangled web of neurons, which is like sticking a camera’s wires in front of its lens. The bundled nerve fibers also need to pass through a hole in the photoreceptor layer to reach the brain. That’s why we have a blind spot. There’s no benefit to these flaws; they’re just quirks of our evolutionary history.

It does make you wonder what it is like to be a bat, so to speak. Consider the scallop:

The mantle of the bay scallop (Argopecten irradians) is festooned with up to 100 brilliant blue eyes. Each contains a mirrored layer that acts as a focusing lens while doubling the chance of capturing incoming light.

What is it like to have access to 100 eyes? It is not as simple as imagining that you could ‘see more’. Our eyes sense more than what we consciously see (so to speak). In human retinas, melanopsin isn’t used to form images but to help entrain circadian rhythms; the ganglion cells that express it send information to a different part of the brain (the suprachiasmatic nucleus), and we experience that information, if we experience it at all, in a fundamentally different way. Now imagine those 100 eyes: are they all there for the same thing? Is the feeling of one the same as the feeling of another?

 

Recent news in journals

Have you missed the recent hubbub about Frontiers? Neuroconscience has this to say:

Lately it seems like the rising tide is going against Frontiers. Originally hailed as a revolutionary open-access publishing model, the publishing group has been subject to intense criticism in recent years. Recent issues include being placed on Beall’s controversial ‘predatory publisher list‘, multiple high profile disputes at the editorial level, and controversy over HIV and vaccine denialist articles published in the journal seemingly without peer review. As a proud author of two Frontiers articles and former frequent reviewer, these issues compounded with a general poor perception of the journal recently lead me to stop all publication activities at Frontiers outlets…

And this from the comments:

My husband, who is in math, had an entirely different experience. He was asked to be an *editor* in a field where he has just one paper. He explained that it’s not really his field – so far so good. The response of Frontiers? Won’t you please please still consider being an editor? This is just bad. If he had accepted (and people do accept all sorts of things for career advancement), he wouldn’t have been in a position to adequately judge the quality of the incoming papers or reviews.

In broader journal news, there is a blog post up at Frontiers about impact factor with this cool chart:

[chart: journal rejection rates and impact factors]

Obviously these journals do not receive the same set of submissions, so in a sense this comparison suffers from severe selection bias.

Björn Brembs has been on a roll about journals and brought up something that I had no idea about: journals can, to a certain extent, negotiate their impact factor!

One of the first accounts to show how a single journal accomplished this feat were Baylis et al. in 1999 with their example of FASEB journal managing to convince the ISI to remove their conference abstracts from the denominator, leading to a jump in its impact factor from 0.24 in 1988 to 18.3 in 1989. Another well-documented case is that of Current Biology whose impact factor increased by 40% after acquisition by Elsevier in 2001. To my knowledge the first and so far only openly disclosed case of such negotiations was PLoS Medicine’s editorial about their negotiations with Thomson Reuters in 2006, where the negotiation range spanned 2-11 (they settled for 8.4). Obviously, such direct evidence of negotiations is exceedingly rare and usually publishers are quick to point out that they never would be ‘negotiating’ with Thomson Reuters, they would merely ask them to ‘correct’ or ‘adjust’ the impact factors of their journals to make them more accurate. Given that already Moed and van Leeuwen found that most such corrections seemed to increase the impact factor, it appears that these corrections only take place if a publisher considers their IF too low and only very rarely indeed if the IF may appear too high (and who would blame them?).
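
The arithmetic behind that FASEB jump is simple: the impact factor is, roughly, citations received this year to the previous two years’ content, divided by the number of ‘citable items’ published in those two years. Reclassify the conference abstracts as non-citable and they vanish from the denominator, while any citations to them still count. A toy calculation (the numbers below are made up, not FASEB’s actual figures) shows how dramatic that can be:

```python
# Toy illustration with made-up numbers: shrinking the "citable items"
# denominator inflates the impact factor even if citations stay fixed.
citations = 3000      # citations this year to the journal's previous two years
articles = 150        # research articles (remain "citable items")
abstracts = 12000     # conference abstracts (reclassified as non-citable)

if_with_abstracts = citations / (articles + abstracts)
if_without_abstracts = citations / articles

print(round(if_with_abstracts, 2), round(if_without_abstracts, 2))  # 0.25 20.0
```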

Entry into the Arcanum (or: on admissions committees and faculty searches)

There have been some interesting articles about graduate school admissions and faculty hiring committees lately.

First, Julie Posselt is publishing a book that takes a sociological look at grad school admissions. She watched six different departments at three universities review graduate applicants. It is by turns expected and horrifying:

For instance, those whose programs were not at the very top of the rankings frequently talked about not wanting to offer a spot to someone they believed would go to a higher-ranked program. They didn’t want their department to be the graduate equivalent of what high school students applying to college term a safety school. In this sense many of these departments turned down superior candidates, some of whom might have enrolled. Many of the professors sound insecure about their programs even though they are among the very best.

… Committee members also seemed to generalize from the experience of past graduate students who failed, wanting to avoid anyone like them in the future. They spoke of “being spooked” by seeing such applicants. The admissions committee members generally assumed applicants were getting Ph.D.s for careers like theirs — faculty jobs at research universities. So they were looking for signs of research potential. And they were also unabashed elitists. “This is an elite university and a lot of the people at the university are elitists,” one professor said with a laugh. “So they make a lot of inferences about the quality of someone’s work and their ability based on where they come from.”

…The applicant, to a linguistics Ph.D. program, was a student at a small religious college unknown to some committee members but whose values were questioned by others. “Right-wing religious fundamentalists,” one committee member said of the college, while another said, to much laughter, that the college was “supported by the Koch brothers.” The committee then spent more time discussing details of the applicant’s GRE scores and background — high GRE scores, homeschooled — than it did with some other candidates. The chair of the committee said, “I would like to beat that college out of her,” and, to laughter from committee members asked, “You don’t think she’s a nutcase?” Other committee members defended her, but didn’t challenge the assumptions made by skeptics.

Next, Yevgenia Kozorovitskiy has an article about being a mother in science. It is all worth a read, but this part is germane to her department’s (neuroscience at Northwestern University) faculty search:

On average, this group – both men and women – defended their PhDs a little before 2008. That means that now at the close of 2015, the bulk of our applicants have lingered in postdoctoral limbo for more than half a decade. A postdoc position used to be an optional step toward independence in my field of neuroscience. Eventually, a year or two of research experience after receiving a doctoral degree and before winding up in a faculty job became expected. But now, seeing strong candidates with less than five years of high profile post-PhD work is rare.

Finally, Cryptogenomicon discusses how their faculty search (at Harvard) has gone so far:

Apparently people self-select pretty strongly. Too strongly, in fact. I’m worried about how much self-selection is going on, in two different respects. One is that our applicant pool starts out biased: it’s only 21% female, and it’s only 5% underrepresented minority. The other is that it’s striking how many applicants are either at Harvard already, or have past Harvard training. I think both observations are telling us the same thing: that there are highly qualified people who don’t think they’ll be comfortable or welcome here.

First thing I look for is what the research question is. I write a one-line summary for myself, to force myself to extract the relevant info. If, after due effort, I can’t get it, then I’m worried about clarity and focus — either mine (if I’m tired and it’s time for a break), or the candidate’s. Second I’m looking at publication history. I’m looking for evidence of substantial, original, creative work, and a trajectory that I can understand that leads up to the research proposal. I also check the titles and abstracts of the 1-3 papers/manuscripts that are submitted in the application package, to make sure that what the candidate identifies as their major contributions agrees with what I got from the CV, and I’ll dip into those papers to see if I get the points. Third, I skim reference letters, where (hidden amongst the superlatives) I’m looking for evidence that the candidate developed original ideas. Fourth, I’m looking for other rigorous selections that the candidate has passed before – graduation with honors as an undergrad, competitive postdoc fellowships, substantive research awards. No one thing is disqualifying by itself, especially because I’m on the lookout for unconventional people with unique superpowers, who are going to open up totally new niches. What I’m not concerned with: journal titles, H-index, citation counts, or impact factors. (ed. – not explicitly, at least)

Happy hunting

On the connectome

Via Twitter, MnkyMnd thinks this should be required reading for every scientist:

In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.

On Exactitude in Science (Borges, 1946)