#Cosyne17, by the numbers (updated)

cosyne2017_thisyear

Cosyne is the Computational and Systems Neuroscience conference held every year in Salt Lake City (though – hold the presses – it is moving to Denver in 2018). Its status as the keystone computational and systems neuro conference makes it a fairly good barometer of where the field is heading. Since I like data, here is this year’s Cosyne data dump.

First up: who was the most active this year? It is Jonathan Pillow, whom I dub this year’s Hierarch of Cosyne. The most active in previous years were:

  • 2004: L. Abbott/M. Meister
  • 2005: A. Zador
  • 2006: P. Dayan
  • 2007: L. Paninski
  • 2008: L. Paninski
  • 2009: J. Victor
  • 2010: A. Zador
  • 2011: L. Paninski
  • 2012: E. Simoncelli
  • 2013: J. Pillow/L. Abbott/L. Paninski
  • 2014: W. Gerstner
  • 2015: C. Brody
  • 2016: X. Wang

cosyne2017_allposters

If you look at the total number of posters across all of Cosyne’s history, Liam Paninski is and probably always will be the leader. Evidently he was so prolific in the early years that they had to institute a new rule to nerf him like some overpowered video game character.

Visualizing the network diagram of co-authors also reveals a lot of structure in the conference (click for PDF):

cosyne2017_network

And the network for the whole conference’s history is a dense mess with a soft and chewy center dominated by – you guessed it – the Paninski Posse (I am clustered into Sejnowski and Friends from my years at Salk).

cosyne2004-2017_network

People on twitter have seemed pretty excited about this data, so I will update this later with a link to a github repository.
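In the meantime, the core of the analysis is simple to sketch. Here is a minimal, made-up example of how the poster counts and the weighted co-author network could be built from the program’s author lists (the names and abstracts below are hypothetical stand-ins, not the real data):

```python
from collections import Counter
from itertools import combinations

# Hypothetical author lists, one per abstract; the real input would be
# scraped from the Cosyne program.
abstracts = [
    ["L. Paninski", "J. Pillow"],
    ["J. Pillow", "A. Zador", "L. Paninski"],
    ["T. Sejnowski", "A. Calhoun"],
]

# Posters per author (the "Hierarch" count)...
poster_counts = Counter(a for authors in abstracts for a in authors)

# ...and weighted co-authorship edges: each pair of authors on an
# abstract gets an edge, weighted by the number of shared abstracts.
edges = Counter()
for authors in abstracts:
    for pair in combinations(sorted(authors), 2):
        edges[pair] += 1

print(poster_counts.most_common(2))
print(edges[("J. Pillow", "L. Paninski")])  # number of shared abstracts
```

Feed the edge list into any graph layout tool and you get diagrams like the ones above.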

Speaking of Twitter, it is substantially more active than it has been in the past. Neuroscience Twitter keeps growing and is a great place to learn about new ideas in the field. Here is a feed of everyone attending who is on Twitter. Let me know if you want me to add you.

There are two events you should consider attending if you are at Cosyne: the Simons Foundation is hosting a social on Friday evening, and on Saturday night there is the Hyperbolic Cosyne Party, which you should RSVP to right away…!

On a personal note, I am giving a poster on the first night (I-49) and am co-organizing a workshop on Automated Methods for High-Dimensional Analysis. I hope to see you all there!

Previous years: [2014, 2015, 2016]

 

Update – I analyzed a few more things based on new data…

cosyne-institutions-2017

I was curious which institution had the most abstracts (measured by the presenting author’s institution). Then I realized I also had last year’s data:

cosyne-institutions-2016

Somehow I had not fully realized NYU was so dominant at this conference.

I also looked at which words are most enriched in accepted Cosyne abstracts:

acceptedwords

Ilana said that she sees: behavior. What is enriched in rejected abstracts? Oscillations, apparently (this is a big topic of conversation so far) 😦

rejectedwords
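“Enrichment” here just means a word is relatively more frequent in one pool of abstracts than the other. A minimal sketch of how that could be computed with a smoothed log-odds ratio (the toy abstracts below are invented, not the real submissions):

```python
import math
from collections import Counter

# Toy stand-ins for the accepted and rejected abstract texts.
accepted = [
    "behavior of the population during behavior",
    "population activity predicts behavior",
]
rejected = [
    "oscillations in the gamma band",
    "theta oscillations and coherence",
]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.lower().split())
    return counts

acc, rej = word_counts(accepted), word_counts(rejected)
n_acc, n_rej = sum(acc.values()), sum(rej.values())

def log_odds(word):
    # Add-one smoothing so words absent from one pool don't blow up.
    return math.log(((acc[word] + 1) / (n_acc + 1)) /
                    ((rej[word] + 1) / (n_rej + 1)))

# Positive scores: enriched in accepted; negative: enriched in rejected.
vocab = sorted(set(acc) | set(rej), key=log_odds, reverse=True)
print(vocab[:3])   # most "accepted" words
print(vocab[-3:])  # most "rejected" words
```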

Finally, I clustered the most common words that co-occur in abstracts. The clusters?

  1. Modeling/population/activity (purple)
  2. information/sensory/task/dynamics (orange)
  3. visual/cortex/stimuli/responses (puke green)
  4. network/function (bright blue)
  5. models/using/data (pine green)

abstract-word-co-occurrence
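The co-occurrence clustering above can be sketched in a few lines: count how often word pairs appear in the same abstract, link strongly co-occurring words, and group them. This toy version uses connected components as a crude stand-in for the actual graph clustering, on made-up abstracts:

```python
from collections import Counter, defaultdict
from itertools import combinations

# Invented mini-abstracts standing in for the real submission texts.
abstracts = [
    "model population activity",
    "population activity dynamics",
    "visual cortex responses",
    "visual stimuli responses",
]

# Co-occurrence counts over distinct word pairs within each abstract.
cooc = Counter()
for text in abstracts:
    for a, b in combinations(sorted(set(text.split())), 2):
        cooc[(a, b)] += 1

# Link words that co-occur at least twice.
adj = defaultdict(set)
for (a, b), n in cooc.items():
    if n >= 2:
        adj[a].add(b)
        adj[b].add(a)

def components(adj):
    # Depth-first search to pull out connected components ("clusters").
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            w = stack.pop()
            if w in seen:
                continue
            seen.add(w)
            comp.add(w)
            stack.extend(adj.get(w, ()))
        comps.append(comp)
    return comps

print(components(adj))
```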

RIP Roger Tsien, 1952-2016

2016_08_31_tsien_roger

Sad news today – Roger Tsien passed away one week ago.

Can anyone imagine biology today without GFP? And though he is best known for that – he did share the Nobel prize for GFP, after all – Roger Tsien did much more. One of my favorite stories about Roger (and he was a character; you couldn’t be at UCSD without one or two Roger stories) came from a talk he gave while I was there. He was asked to step in at a somewhat late moment to give a series of three one-hour lectures on his life’s work. It was one of the best talks I have attended, clear and insightful and packed with interesting anecdotes.

He began by describing his struggles as a graduate student: he had a hard time doing electrophysiology. His solution? He created BAPTA, the calcium chelator that is the basis for neural imaging. Yeah: electrophysiology too hard? Just create the basis for calcium imaging. Much easier. (And he was quite honest; it was much easier for him.)

You should of course read the original BAPTA paper (it is under-appreciated!). Then of course there is his GFP paper, establishing a point mutation with much improved characteristics. Here is his Nobel lecture. I don’t think it’s an overstatement to say that these are just the tip of the iceberg.

Unrelated to all that, 8/20 edition

Will somebody please give Norm Macdonald another TV show? And why it’s partly his own fault. If you don’t like the moth joke, I don’t understand you.

The Irritating Gentleman (1874). By Berthold Woltze

TheIrritatingGentleman

The Heiman Lab turned their website into a text adventure. This is seriously amazing, though I’m a little upset it won’t let me eat the petri dish.

You will be able to order a self-driving Uber in Pittsburgh this month. Though obviously there will be someone sitting in the driver’s seat for safety reasons. Maybe the ride is free if you get in an accident?

Someone recorded their heart rate during a conference talk. Fear is the mind-killer:

heartrateduringtalk

When are babies born?

whenarebabiesborn

The Broad has spent $10 million this year and $5 million last year on the CRISPR dispute. Now we hear that a grad student claims he was working on CRISPR in the Zhang lab but the lab only really jumped on the work after seeing Doudna’s in vitro paper. The Broad says he was lying to get a job with Doudna and stay in the country.

Is it story that makes us read? Maybe not, but this essay has one of the worst defenses of its thesis that I’ve seen in a while.

If you are a neuroscientist and want to see educated laypeople try to puzzle out neuroscience by logic and anecdotes (unsuccessfully), try reading this post and its comment thread.

Unrelated to all that, 8/13 edition

UKIP (and Brexit?) is driven by the perception of immigration, not actual immigration.

What is the best classifier in machine learning? One paper suggested that we should all just go to random forests straight away. But then, maybe not. So… do whatever you were doing before? Anyway, thanks to this I learned about XGBoost.

When you eat a dried fig, you’re probably chewing fig-wasp mummies, too. Learn everything you can about figs and wasps and fig wasps!

Do animals fight wars and if so what was the largest war? This is the story of Argentine ant and its continent-spanning supercolonies.

The NIH postdoc salaries are out for the year and you can see the Obama pay raise! …if you are a 0- or 1-year postdoc, that is.

Here is the Standard Model of physics in one equation (click through for an explanation):

sml

Are markets efficient? A good discussion between Fama and Thaler. Unsurprisingly, a lot of it comes down to semantics.

Did the human brain evolve to speak multiple languages? I think the historical evidence is a resounding yes.

A ‘brief’ history of neural networks. I liked this from part four:

So, why indeed, did purely supervised learning with backpropagation not work well in the past? Geoffrey Hinton summarized the findings up to today in these four points:

  • Our labeled datasets were thousands of times too small.
  • Our computers were millions of times too slow.
  • We initialized the weights in a stupid way.
  • We used the wrong type of non-linearity.

Unrelated to all that, 8/6 edition

Ten simple rules for writing a postdoctoral fellowship and Ten simple rules for curating and facilitating small workshops

The super-recognizers of Scotland Yard:

In 2007, Neville had set up a unit to collate and circulate images of unidentified criminals captured on CCTV. Officers were asked to check the Met’s “Caught on Camera” notices to see if they knew any of the suspects. “It became apparent that some officers were much better than others,” Neville told me. “For example, if I received 100 names, some officers would have submitted ten or 15, while in the main they were one-off identifications.”…Met officers trawled through tens of thousands of hours of CCTV footage, identifying 609 suspects responsible for looting, arson and other criminal acts. One officer, PC Gary Collins, made 180 identifications, including that of one of the most high-profile suspects…

See also this paper on super-recognizers

National Geographic Travel Photographer of the Year 2016

Even though there were a lot of people in Ben Youssef, still here was more quiet and relaxing compare to the street outside in Marrakesh. I was waiting for the perfect timing to photograph for long time.


I found this open on my mobile browser window one day and I don’t remember why, but am thankful to Past Me:

CpBc8srWEAAQZqQ

“I have a great idea! Let’s get neural networks to build faces!”

ANN faces

OTOH they can infer 3D images of cars pretty well (this is a Big Deal)

What are some of the best algorithms of the 20th century?

We’ve been understanding the Faraday cage all wrong:

So I started looking in books and talking to people and sending emails. In the books, nothing! Well, a few of them mention the Faraday cage, but rarely with equations. And from experts in mathematics, physics, and electrical engineering, I got oddly assorted explanations. They said the skin depth effect was crucial, or this was an application of the theory of waveguides, or the key point was Babinet’s principle, or it was Floquet theory, or “the losses in the wires will drive everything…” And then at lunch one day, colleague n+1 told me, it’s in the Feynman Lectures [2]! And sure enough, Feynman gives an argument that appears to confirm the exponential intuition exactly. [The problem is, it is wrong.]

Everything you wanted to know about hair

Every number tells a story

CSHL Vision Course

cshlfirstday

I have just returned from two weeks in Cold Spring Harbor at the Computational Neuroscience: Vision course. I was not entirely sure what to expect. Maybe two weeks of your standard lectures? No – this was two weeks of intense scientific discussion punctuated with the occasional nerf fight (sometimes during lectures, sometimes not), beach bonfire, or table tennis match.

It was not just the material that was great but the people. Every day brought in a fresh rotation of scientists ready to spend a couple of days discussing their work – and the work of the field – and to just hang out. I learned as much or more at the dinner table as I did in the seminar room. And it wasn’t just the senior scientists who were exhilarating, but the other students too. It is a bit intimidating seeing how much talent exists in the field… and how great they are as people.

I also learned that the graduate students at CSHL get to attend these courses for free. It was great to meet people from all of the labs and hear about the cool stuff going on. Of course, they live pretty well, too. Here is the back patio of my friend’s house:

cshlpdhouse

I think I could get used to that?

Anyway, this is all a long-winded way of saying: if you get the chance, attend one of these courses! And being there motivated me to start making more of an effort to update the blog again. I swear, I swear…

cshllastday

I have seen things you wouldn’t believe (in my mind)

When two different people perceive blue, is it the same to both of them? When two people imagine it, is it the same? Can everyone even imagine it?

If I tell you to imagine a beach, you can picture the golden sand and turquoise waves. If I ask for a red triangle, your mind gets to drawing. And mom’s face? Of course.

You experience this differently, sure. Some of you see a photorealistic beach, others a shadowy cartoon. Some of you can make it up, others only “see” a beach they’ve visited. Some of you have to work harder to paint the canvas. Some of you can’t hang onto the canvas for long. But nearly all of you have a canvas.

I don’t. I have never visualized anything in my entire life. I can’t “see” my father’s face or a bouncing blue ball, my childhood bedroom or the run I went on ten minutes ago. I thought “counting sheep” was a metaphor. I’m 30 years old and I never knew a human could do any of this. And it is blowing my goddamned mind…

I opened my Facebook chat list and hunted green dots like Pac-Man. Any friend who happened to be online received what must’ve sounded like a hideous pick-up line at 2 o’clock in the morning:
—If I ask you to imagine a beach, how would you describe what happens in your mind?
—Uhh, I imagine a beach. What?
—Like, the idea of a beach. Right?
—Well, there are waves, sand. Umbrellas. It’s a relaxing picture. You okay?
—But it’s not actually a picture? There’s no visual component?
—Yes there is, in my mind. What the hell are you talking about?
—Is it in color?
—Yes…..
—How often do your thoughts have a visual element?
—A thousand times a day?
—Oh my God.

And so on. Read the whole thing, and this as well. How common is something like this? Judging by internet comments – very common, especially for other sensory modalities. I can visualize fine, though my imaginary ‘sense of place’ is probably stronger, but I cannot ‘imagine’ a taste or smell to save my life. I once went into a fancy cocktail bar and asked the owner how he came up with the cocktails. He just thought about how two ingredients would taste together, he said, and then he combined them like that. Whoa, whoa, whoa, I said, you can imagine tastes? And combine them in your mind?

Who knows what others imagine in their mind? Does imagining a picture mean the same thing to different people? Is it vivid or faded, cartoonish or realistic?

When we do experiments with animals – how much are we relying on this supposed universality which, even among humans, is anything but?

Friday Fun: Science Combat

e52a7734335649.56cdf6d2294a8

Someone has combined pixel art, Mortal Kombat, and famous scientists. This is actually going to be a real video game released for Superinteressante magazine at some point. Until then, gaze in awe at the mighty combatants.

#Cosyne2016, by the numbers

mostCosyne2016

Cosyne is the systems and computational neuroscience conference held every year in Salt Lake City and Snowbird. It is a pretty good representation of the direction the community is heading… though given the falling acceptance rate you have to wonder how true that will remain, especially for those on the ‘fringe’. But 2016 is in the air, so it is time to update the Cosyne statistics.

I’m always curious about who is most active in any given year, and this year it is Xiao-Jing Wang, whom I dub this year’s Hierarch of Cosyne. I always think of his work on decision-making and the speed-accuracy tradeoff. He has used some very nice modeling of small circuits to show how these tasks could be implemented in nervous systems. Glancing over his posters, though, his work this year looks a bit more varied.

Still, it is nice to see such a large clump of people at the top: the distribution of posters is much flatter this year than previously, which suggests the conference is less dominated by a few big labs.

Here are the previous ‘leaders’:

  • 2004: L. Abbott/M. Meister
  • 2005: A. Zador
  • 2006: P. Dayan
  • 2007: L. Paninski
  • 2008: L. Paninski
  • 2009: J. Victor
  • 2010: A. Zador
  • 2011: L. Paninski
  • 2012: E. Simoncelli
  • 2013: J. Pillow/L. Abbott/L. Paninski
  • 2014: W. Gerstner
  • 2015: C. Brody
  • 2016: X. Wang

mostCosyneALL

If you look at the total number across all years, well, Liam Paninski is still massacring everyone else. At this rate, even if Pope Paninski doesn’t submit any abstracts over the next few years and someone else submits six per year… well, it would still be a good half a decade before he could possibly be dethroned.

The network diagram of co-authors is interesting, as usual. Here is the network diagram for 2016 (click for PDF):

cosyne2016

And the mess that is all-time Cosyne:

cosyneALL-straight

 

I was curious about this network. How connected is it? What is its dimensionality? If you look at the eigenvalues of the adjacency matrix, you get:

eigenvals

I put the first two eigenvectors at the bottom of this post, but suffice it to say the first eigenvector is basically Pouget vs. Latham! And the second is Pillow vs Paninski! So of course, I had to plot a few people in Pouget-Pillowspace:

pillowspace

(What does this tell us? God knows, but I find it kind of funny. Pillowspace.)
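The eigendecomposition itself is easy to sketch. Below is a pure-Python toy version that finds the leading eigenvector of a small, invented co-authorship adjacency matrix via power iteration; the real analysis would just call a library eigensolver (and the second eigenvector falls out by deflating the first). The author names and edge weights here are made up for illustration:

```python
# Toy symmetric co-authorship adjacency matrix for four hypothetical
# authors; entry (i, j) is the number of shared abstracts.
authors = ["Pouget", "Latham", "Pillow", "Paninski"]
A = [
    [0.0, 3.0, 0.0, 0.0],
    [3.0, 0.0, 1.0, 0.0],
    [0.0, 1.0, 0.0, 2.0],
    [0.0, 0.0, 2.0, 0.0],
]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def normalize(v):
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v]

def top_eigenvector(M, iters=500):
    """Power iteration on M + c*I: the shift makes the dominant
    eigenvalue unique and positive without changing eigenvectors."""
    c = max(sum(abs(x) for x in row) for row in M)
    shifted = [[x + (c if i == j else 0.0) for j, x in enumerate(row)]
               for i, row in enumerate(M)]
    v = normalize([1.0] * len(M))
    for _ in range(iters):
        v = normalize(matvec(shifted, v))
    return v

v1 = top_eigenvector(A)
lam = sum(x * y for x, y in zip(v1, matvec(A, v1)))  # Rayleigh quotient
for name, x in zip(authors, v1):
    print(f"{name}: {x:+.3f}")
print(f"leading eigenvalue: {lam:.3f}")
```

Each author’s loading on an eigenvector is their coordinate along that axis, which is all the “Pouget-Pillowspace” plot above is showing.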

Finally, I took a bunch of abstracts and fed them through a Markov model to generate some prototypical Cosyne sentences. Here are abstracts that you can submit for next year:

  • Based on gap in the solution with tighter synchrony manifested both a dark noise [and] much more abstract grammatical rules.
  • Tuning curves should not be crucial for an approximate Bayesian inference which would shift in sensory information about alternatives
  • However that information about 1 dimensional latent state would voluntarily switch to odor input pathways.
  • We used in the inter vibrissa evoked responses to obtain time frequency use of power law in sensory perception such manageable pieces have been argued to simultaneously [shift] acoustic patterns to food reward to significantly shifted responses
  • We obtained a computational capacity that is purely visual that the visual information may allow ganglion cells [to] use inhibitory coupling as NMDA receptors, pg iii, Dynamical State University
  • Here we find that the drifting gratings represent the performance of the movement.
  • For example, competing perceptions thereby preserve the interactions between network modalities.
  • This modeling framework of goal changes uses [the] gamma distribution.
  • Computation and spontaneous activity at the other stimulus saliency is innocuous and their target location in cortex encodes the initiation.
  • It is known as the presentation of the forepaw target reaction times is described with low dimensional systems theory Laura Busse Andrea Benucci Matteo Carandini Smith-Kettlewell Eye Research.
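A first-order Markov model like the one that generated these gems is only a few lines: record which words follow which, then walk the chain. A minimal sketch on an invented mini-corpus (the real model was presumably trained on the actual abstract texts):

```python
import random
from collections import defaultdict

# Tiny invented corpus standing in for the real abstract texts.
corpus = (
    "we used a model of population activity . "
    "we used a bayesian model of sensory information . "
    "population activity encodes sensory information ."
)

# First-order Markov model: map each word to the words that follow it.
transitions = defaultdict(list)
words = corpus.split()
for a, b in zip(words, words[1:]):
    transitions[a].append(b)

def generate(start, n_words, seed=0):
    """Random-walk the chain from `start` for up to n_words words."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words - 1):
        followers = transitions.get(out[-1])
        if not followers:
            break  # dead end: no observed successor
        out.append(rng.choice(followers))
    return " ".join(out)

print(generate("we", 10))
```

Because successors are sampled with their observed frequencies, the output is locally plausible and globally nonsense, exactly the effect in the list above.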

Note: sorry about the small font size. This is normally a pet peeve of mine. I need to get access to Illustrator to fix it and will do so later…

The first two eigenvectors:

ev1 ev2

Doudnagate

“They claimed merely to be scientists discovering facts; [I] doggedly argued that they were writers and readers in the business of being convinced and convincing others.”

“By being sufficiently convincing, people will stop raising objections altogether, and the statement will move toward a fact-like status. Instead of being a figment of one’s imagination (subjective), it will become a “real objective thing,” the existence of which is beyond doubt.”

Latour, Laboratory Life

When reading any review article in science, the most interesting thing is not what is said, but what the author has chosen to say. Scientific knowledge is both vast and deep, and every author surveying this territory must pick and choose from it. More importantly, what the author chooses to say broadcasts both what and how they are thinking about the topic. Like a philosophy, this can give a careful reader a new way to think about, say, retinal ganglion cells, or information maximization, or, I don’t know, the history of CRISPR.

So: if you are Eric Lander, eminent biologist, how would you go about shaping your history of CRISPR? Especially given that it is in the midst of a vicious patent battle between Berkeley and the Broad Institute (which you happen to head), between Doudna and Zhang (your colleague)?

Apparently you would do it in a way that would piss a lot of people off.

Furthermore, Jennifer Doudna of the University of California, Berkeley—who, along with Emmanuelle Charpentier of the Helmholtz Centre for Infection Research in Germany, is currently locked in the patent dispute with the Broad’s Feng Zhang and colleagues—called Lander’s account “factually incorrect” in a January 17 PubMed Commons comment. Doudna wrote that Lander’s description of her lab “and our interactions with other investigators . . . was not checked by the author and was not agreed to by me prior to publication.”…

One of those scientists was George Church, who has appointments at Harvard and the Broad and has collaborated with Zhang and others on CRISPR research. “Eric [Lander] asked me some very specific questions on 14-Dec and I offered to fact check (as I generally do),” Church wrote in an email to The Scientist. “He sent me a preprint on 13-Jan (just hours before it came out in Cell).  I immediately sent him a list of factual errors, none of which have been corrected.”

Everything that I would say about this has already been said in voluminous form, mostly by Lior Pachter:

All of the research papers referenced in the Lander perspective have multiple authors, on average about 7, and going up to 20. When discussing papers such as this, it is therefore customary to refer to “the PI and colleagues…” in lieu of crediting all individual authors. Indeed, this phrase appears throughout Lander’s perspective, where he writes “Moineau and colleagues..”, “Siksnys and colleagues..”, etc. It is understood that this means that the PIs guided projects and provided key insights, yet left most of the work to the first author(s) who were in turn aided by others…

But in choosing to highlight a “dozen or so” scientists, almost all of whom are established PIs at this point, Lander unfairly trivializes the contributions of dozens of graduate students and postdocs who may have made many of the discoveries he discusses, may have developed the key insights, and almost certainly did most of the work. For example, one of the papers mentioned in the Lander perspective is…

and also by Dominic Berry:

My historical muscle reflex was provoked by a case detailed in Graeme Gooday and Stathis Arapostathis’ Patently Contestable (2013) from the history of wireless telegraphy. After key patents were awarded to one Guglielmo Marconi at the end of the nineteenth century, a range of different histories of wireless telegraphy began to emerge, ones that more or less stressed the greater importance of other individual scientists, or the international collective. The ‘Arch-builders of Wireless Telegraphy’ as John Joseph Fahie called them in one of the earliest of these histories (1899), are brought together most evocatively in this visual representation from his book on the subject. Fahie’s intention, so Gooday and Arapostathis argue, was to decenter Marconi from this history, making his patent claims look less legitimate or at least less worthy.

And now on to the fun stuff: dirty dirty gossip.

Read this storify of Michael Eisen’s rant on the review.

Note that Jennifer Doudna had to comment on Pubmed because Cell didn’t approve her comment.

Note also that the Doudna lab retweeted this snarky tweet amidst other more scientifically-minded ones:

CZDvcvsUsAAcsuI

There were two recent popular press articles, one profiling Doudna and the other Zhang (with their own set of problems assigning credit). Turns out the one profiling Doudna was written by a Berkeley employee and didn’t have a COI statement (the backstory here is that this is being seen as going from Doudna vs. Zhang to a broader Berkeley vs. Broad Institute).

Go read the PubPeer comment thread which has fun facts like:

He also omits the fact that Doudna had already published 10 papers on CRISPR before her paper with Charpentier. “She had been using crystallography and cryo-EM to solve structures….” – no mention of any of the many insights from her work, in striking contrast to the detailed accounting of virtually all other investigators. The intended impression seems to be that she was a minor player, putting out a couple of technical observations, before her chance meeting with Charpentier.

Paul Knoepfler asked whether people thought Lander gave enough credit to Doudna and the twitterverse says no.

CY9cFktUMAA6Tc8

I feel like I’m missing something. Anything else? I do love gossip.

More seriously, this is your chance to see how history is actually made.