#CCN2019 by the numbers

I am in Berlin for the Cognitive Computational Neuroscience (CCN) conference. It is an interesting collection of people working on mostly human (though some animal) cognitive neuroscience, often using neural network models. Now in its third year, CCN is a nice contrast to Cosyne, a conference more focused on traditional systems neuroscience along with computational modeling.

While I’m here, I thought I would do a quick analysis along the lines of what I have done in years past for Cosyne. I only have one year’s worth of data so there is a limit on what I can analyze but I wanted to know – who was here?

The most posters (abstracts) were from the lab of Simon Kelly. There is not a lot of overlap here with Cosyne with the exception of Tim Behrens – a bountiful contributor to the Cosyne 2019 conference as well – and perhaps Wei Ji Ma? So it seems the communities are at least somewhat segregated for now.

There is also the co-authorship network: who was on posters with whom? That is above (click through for a high-resolution PDF). There are ~222 connected components (distinct subgraphs, visualized at the top of the page) and the largest connected component is relatively small. It will be interesting to see if this changes as the community coheres over the next few years.
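
For the curious, here is a minimal sketch of how a co-authorship graph like this can be built and its connected components counted with networkx – the poster list below is a stand-in, not the actual CCN data.

```python
# Minimal sketch (not the original analysis code): build a co-authorship graph
# from per-poster author lists and count its connected components.
from itertools import combinations
import networkx as nx

# Hypothetical input: one list of author names per poster.
posters = [
    ["A. Author", "B. Author", "C. Author"],
    ["C. Author", "D. Author"],
    ["E. Author", "F. Author"],
]

G = nx.Graph()
for authors in posters:
    # Connect every pair of co-authors appearing on the same poster.
    G.add_edges_from(combinations(authors, 2))

components = sorted(nx.connected_components(G), key=len, reverse=True)
print(f"{len(components)} connected components; largest has {len(components[0])} authors")
```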

That’s it for this year! Next year I will try to take data from the (short) life history of the conference.

#Cosyne19, by the numbers

As some of you might know, there’s been a lot of tumult surrounding this year’s Cosyne (Computational and Systems Neuroscience) conference. The number of submissions skyrocketed from the year before and the rejection rate went from something like 40% to something like 60% – there were over 1000 abstracts submitted! Crazier still, there is a waitlist just to register for the conference. So what has changed?

Lisbon, Lisbon, Lisbon. This is the first year that the conference has been in Europe and a trip to Portugal in the middle of winter is pretty appealing. On the other hand, maybe Cosyne is going the way of NeurIPS and becoming data science central? Let’s see what’s been going on.

You can see from the above that the list of most active PIs at Cosyne should look pretty recognizable.

First is who is the most active – and this year it is Jonathan Pillow, whom I dub this year’s Hierarch of Cosyne. The most active in previous years were:

  • 2004: L. Abbott/M. Meister
  • 2005: A. Zador
  • 2006: P. Dayan
  • 2007: L. Paninski
  • 2008: L. Paninski
  • 2009: J. Victor
  • 2010: A. Zador
  • 2011: L. Paninski
  • 2012: E. Simoncelli
  • 2013: J. Pillow/L. Abbott/L. Paninski
  • 2014: W. Gerstner
  • 2015: C. Brody
  • 2016: X. Wang
  • 2017: J. Pillow
  • 2018: K. Harris
  • 2019: J. Pillow

If you look at the most active across all of Cosyne’s history, you can see things shift and, remarkably, someone is within striking distance of taking over Liam Paninski’s top spot (I fully expect him to somehow submit 1000 posters next year despite there being a rule specifically designed to limit how much he can submit!).

It is interesting to look at the dynamics through time – I have plotted the cumulative posters by year below and labeled a few people. It looks like you can see when the Paninski rule was implemented (2008 or 2009) and when certain people became PIs (Surya Ganguli became suspiciously more productive in 2012).
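
For anyone who wants to make this kind of plot, here is a rough sketch – the data layout and names are my assumptions, not the code behind the figure.

```python
# Sketch (assumed data layout): cumulative poster counts per author across
# years, suitable for plotting one trajectory per person.
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical table: one row per (author, year) poster appearance.
df = pd.DataFrame({
    "author": ["L. Paninski", "L. Paninski", "J. Pillow", "J. Pillow", "J. Pillow"],
    "year":   [2007,          2008,          2017,        2018,        2019],
})

counts = df.groupby(["author", "year"]).size().unstack(fill_value=0)
cumulative = counts.cumsum(axis=1)   # running total of posters per author
# (for a real plot you would reindex the columns so years with zero posters appear too)

cumulative.T.plot(legend=True)       # one line per author, year on the x-axis
plt.xlabel("Year"); plt.ylabel("Cumulative posters")
plt.show()
```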

Adam Charles suggested that we should look at virality – if a person’s ideas were a disease, who would spread their ideas (diseases) most effectively? Working from a measure defined here, he calculated the most viral people at Cosyne19:

And also if you normalize for the number of nodes the people are directly connected to:

And similarly for Cosyne 2004 – 2019:

In other words, are you viral because you are linked to a lot of people who are in turn linked to a lot of people (top figure)? Or are you viral because you are connected to a broad collection of semi-viral co-authors (bottom figure)?
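
I won’t reproduce the exact measure from the link here, but as a rough stand-in: eigenvector centrality captures the first notion (being linked to well-connected people), and dividing it by degree gives something like the second. A sketch with networkx:

```python
# Rough stand-in for the virality measure (the actual metric comes from the
# linked source and may differ): eigenvector centrality rewards being linked
# to well-connected people; dividing by degree normalizes for how many direct
# co-authors someone has.
import networkx as nx

def virality_scores(G: nx.Graph):
    # In practice you might restrict this to the largest connected component.
    eig = nx.eigenvector_centrality(G, max_iter=1000)
    deg = dict(G.degree())
    normalized = {n: eig[n] / deg[n] for n in G if deg[n] > 0}
    return eig, normalized

# eig ranks something like the "top figure" notion; normalized is the
# per-co-author version in the "bottom figure".
```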

It’s been remarked that the lists above are pretty male-heavy. I thought that maybe the non-PIs would be more diverse? So I plotted the number of posters from 2013 – 2019 (mislabeled below) for which I have author ordering: how many posters does each person have where they are not the last or second-to-last author, given that they are not on the PI list above? The list below is, uh, not any better at representation.


What is it that got accepted in Cosyne19? These are the most common words in the abstracts:

These are the words that are more popular in 2019 than in the 2018 abstracts:

Conversely, these are the words that are less popular this year than previous years. Sorry, dopamine.
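
For reference, here is a rough sketch of how word frequencies can be counted and compared across years – the preprocessing is an assumption on my part and surely differs from what produced the figures above.

```python
# Sketch (assumed preprocessing, not the original pipeline): count word
# frequencies per year and rank words by how much their share changed.
import re
from collections import Counter

def word_freqs(abstracts):
    """Relative frequency of each lowercase word across a list of abstract strings."""
    counts = Counter(w for text in abstracts
                       for w in re.findall(r"[a-z]+", text.lower()))
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def biggest_movers(abstracts_2018, abstracts_2019, n=20):
    f18, f19 = word_freqs(abstracts_2018), word_freqs(abstracts_2019)
    delta = {w: f19.get(w, 0) - f18.get(w, 0) for w in set(f18) | set(f19)}
    gainers = sorted(delta, key=delta.get, reverse=True)[:n]   # more popular in 2019
    losers = sorted(delta, key=delta.get)[:n]                  # less popular in 2019
    return gainers, losers
```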

I had thought that maybe the increased popularity at Cosyne was because of an increase in participation from NeurIPS refugees. If so, it doesn’t show up in the list of words above. I tried various forms of topic modeling to try to parse out the abstracts. I’ve never found a way of clustering the abstracts that I find satisfying – the labels I get out never correspond to my intuition for how the subfields should be partitioned – but here is an embedding using doc2vec of all the abstracts from 2017 – 2019:

And here is an embedding in the same space but only for 2019 abstracts. Not so different!
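
For the curious, here is a minimal sketch of the doc2vec pipeline using gensim; the corpus, hyperparameters, and the t-SNE projection are illustrative stand-ins, not the settings behind the figures.

```python
# Sketch of a doc2vec embedding of the abstracts (assumed preprocessing;
# hyperparameters are illustrative only).
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.manifold import TSNE
import numpy as np

# Stand-in corpus; the real analysis used the 2017-2019 abstract texts.
abstracts = [
    "recurrent network model of decision making in parietal cortex",
    "deep network model of visual cortex responses",
    "dopamine signals reward prediction error in striatum",
]
docs = [TaggedDocument(words=text.lower().split(), tags=[i])
        for i, text in enumerate(abstracts)]

model = Doc2Vec(docs, vector_size=50, min_count=1, epochs=40)
vectors = np.array([model.dv[i] for i in range(len(abstracts))])

# Project to 2D for plotting; perplexity must be smaller than the number of documents.
coords = TSNE(n_components=2, perplexity=min(30, len(abstracts) - 1)).fit_transform(vectors)
```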

And if we look at the number of abstracts that contain words relating to different model organisms – or just “modeling”, “models”, “simulations”, etc. – we see it has stayed pretty much the same year-to-year.
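
A sketch of that kind of keyword counting – the keyword lists here are mine and only illustrative:

```python
# Sketch: count how many abstracts (per year) mention each model organism or
# modeling in general. The keyword sets are illustrative, not the exact ones used.
ORGANISMS = {
    "mouse":  {"mouse", "mice", "murine"},
    "rat":    {"rat", "rats"},
    "monkey": {"monkey", "macaque", "primate"},
    "human":  {"human", "humans", "participants"},
    "model":  {"model", "models", "modeling", "simulation", "simulations"},
}

def organism_counts(abstracts):
    """abstracts: list of abstract strings for one year."""
    counts = {name: 0 for name in ORGANISMS}
    for text in abstracts:
        words = set(text.lower().split())
        for name, keywords in ORGANISMS.items():
            if words & keywords:
                counts[name] += 1
    return counts
```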

Maybe it is a different group of people who are at the conference? Visualizing the network diagram of co-authorships reveals some of the structure in the computational neuroscience community (click image for zoomable PDF):

Some highlights from this:

IDK WTF is going on at the Allen Institute but I like it:

Geography is pretty meaningful. The Northeast is more clustered than you would expect by chance:

As are the Palo Alto Pals:

Here is a clustering of everyone who has been to Cosyne since 2004 and has at least five co-authors. It’s a mess! (click image for zoomable PDF)
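
For a sense of how such a grouping can be computed, here is a sketch using modularity-based communities in networkx as a stand-in (the clustering method actually used may differ).

```python
# Sketch of one way to cluster the co-authorship graph, restricted to authors
# with at least five co-authors. Modularity-based communities are a stand-in
# for whatever clustering produced the figure.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def cluster_authors(G: nx.Graph, min_coauthors=5):
    core = G.subgraph(n for n in G if G.degree(n) >= min_coauthors)
    return list(greedy_modularity_communities(core))

# Each returned set is one cluster of authors.
```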

Okay this grouping looks pretty similar. Are they the same people? If I look at the proportion of last authors on each abstract who have never been to Cosyne before, it looks like the normal level of inflow – no big new groups of people.

But the number of authors on each abstract has grown pretty heavily:

One thing that is changing is the proportion of authors who belong to the largest subgraphs of the network – that is, who is connected to the “in-group” of Cosyne. And the in-group is larger than ever before:

It’s a bit harder to see here – partly because there are two large subgraphs this year instead of one big glob – but the mean path length (how long it takes to get from one author to another) and the network efficiency (a similar metric that is more robust to size) both indicate a more dispersed set of central clusters. I’m not quite sure why, but it is possible that the central group is replicating itself: the same people, still weakly connected to former PIs/collaborators, opening their own labs and drifting a little further away – but not too far…
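
For reference, here is how these kinds of network statistics – the in-group (largest connected component) fraction, the mean path length, and the global efficiency – can be computed per year with networkx; a sketch, not the actual analysis code.

```python
# Sketch of the network statistics mentioned above, computed on one year's
# co-authorship graph.
import networkx as nx

def cohesion_stats(G: nx.Graph):
    components = sorted(nx.connected_components(G), key=len, reverse=True)
    giant = G.subgraph(components[0])
    return {
        # share of all authors inside the largest connected subgraph ("in-group")
        "in_group_fraction": giant.number_of_nodes() / G.number_of_nodes(),
        # mean shortest-path length, defined only within the giant component
        "mean_path_length": nx.average_shortest_path_length(giant),
        # efficiency is defined on the whole (possibly disconnected) graph
        "global_efficiency": nx.global_efficiency(G),
    }
```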

All in all, it looks like there was an increase in submissions – probably because of the European/Lisbonian location – but no real change in the submissions that were accepted.

Interesting neuro/ML discussions on twitter, 1/9/19

It seems like it might be useful to catalogue the interesting twitter threads that pop up from time to time. They can be hard to parse and easy to miss but there is a lot of interesting and useful stuff. I am going to focus on *scientific result*-related threads. I don’t know if this will be useful – consider it an experiment. Click on the tweets to read more of the threads.


How to tweet about your science #sciencestorm #bitesizescience

Everyone should tweet about their science. Not only will other scientists on Twitter see it, but plenty of other scientists who are not active on Twitter – but pay attention to it! – will see it as well. But the way that you write your tweet will make a huge difference in the amount of attention it gets. No matter how interesting your science is, no matter how finely crafted your paper is, if a tweet isn’t written well it won’t diffuse through Twitter very well.

I’m also going to see if I can start a hashtag: something like #sciencestorm or #bitesizescience added to the first tweet that describes science. I love reading these stories and I wish they were easier to find. Adding a hashtag lets people quickly search for them and find them.

Here are a few tips in no particular order:

  1. Don’t just tweet the title of your paper and add a link. That’s a good start – but what you really want is a series of tweets that slowly explains what you found. Think of this as your chance to provide an accessible narrative about what you found and what you think is most interesting.
  2. Be excited about your research! People will be excited for you. It’s infectious seeing how happy and excited people are when their papers are published. They want to be supportive and congratulate you.
  3. People want to learn something. If you can condense the messages of your paper into short facts, it will get more traction.
  4. Always, always, always include an image. It almost doesn’t matter what the image is – just having one will give a huge uptick in the attention the tweet gets. But the best image lets a person look at it and understand the paper in a single shot. It can be a figure from your paper, it can be a schematic, it can be a few figures. People want to learn something.
  5. If you can, add a video. People love videos even more than they love images! Doing optogenetics? Show a video of a light going on and your animal immediately changing their behavior – people love that shit. Doing cell biology? Show a video of a cell moving or changing or something.
  6. This is stupid, but the time that you tweet matters a bit. Be aware that fewer people are paying attention at 2AM PST than, say, 9AM PST. Think about who – across the world – is awake when you are tweeting.

Let’s go through three examples (which I have been trying to collect here).

The first is a series of tweets (a “tweetstorm”) by Carsen Stringer describing her work looking at the fractal dimension of neural activity. Now, just typing those words I’m thinking “ugh, this sounds so complicated” – and I have a master’s in math! But that’s not how she described it. She slowly builds up the story, starting with the central question, providing examples, and explaining concepts. Even if you have no clue what fractal dimensionality means, you will learn a lot about the work and get excited by the paper – in a way that a single tweet would not allow.

She also makes sure to use explanatory pictures well. Even in the absence of explanation, the simple act of having a picture drives people to engage with the tweet. Look at these examples side-by-side:

Which of the two above looks more interesting? The plain boring text? Or the text with some friendly fox faces? Pictures make a bigger difference than you’d think (which is not to say that every tweet needs a picture – but they help, a lot).

Another example is this from Michael Eisen. This is in a slightly different style that starts off describing the historical background:

What the tweetstorm also provides is insight into how they made their discovery. You get to feel like you are being carried along through their scientific process!

The final example got me right away. I saw this tweet and I couldn’t help but smile. Dom Cram is studying meerkats so he made some meerkat legos. I didn’t even know if I cared about the study but I definitely cared about looking at more lego meerkats (and then I realized I thought the study was interesting)…

If you enjoy this kind of thing, get creative! It’s fun and people want to have fun and learn about your science at the same time.


Monday Open Question: what did you need to do to get a neuroscience job in 2018?

See last year’s post. As always, if you are a postdoc looking for a faculty job, I maintain the neurorumblr with crowdsourced information on open jobs and other helpful resources. You should also add yourself to The List, which lets faculty search committees contact you. If you are on a search committee, feel free to email me (with some evidence that you are a faculty member on a search committee…) for access to The List.

First, I put up a poll on the neurorumblr twitter account to see what conceptions people had about being hired in neuroscience. I’m going to give the answers a bit out of order. To start: what percent of faculty job applicants do you think are male?

The median was 50-60% (roughly even) with a heavy tail toward 60-70%+. On the List last year, there were 65 postdocs of which 28 were female and 37 were male – so ~57% of the applicants (in my dataset) were male last year, a little lower than the year before (62%).

How about faculty HIRES? The estimate was about the same, but weighted more toward the higher end (60%+). Last year, 57% of the people on my hired list were male. This year it is a tick higher at 61.6%.

One of the big surprises in last year’s analysis (to naive me) was how geographically clustered the people who got hired were.

I expanded the analysis this year to look at where faculty hires got their PhDs. The PhD institutions are much more geographically dispersed than the postdoc institutions. In a subset of 69 faculty hires where I was somewhat confident of the PhD-granting institution, only four institutions had more than two people hired – with UC San Diego (go Tritons!) and UPenn having the most at four. I’m guessing a lot of this is statistical noise and will fluctuate widely from year to year, but confirming that will take more years of data.

How about postdoc institutions? Again, it shows an absurd NYC+ area dominance (institutions located in and around NYC, NYC+ category below includes people in NYC category).

Alright, now on to specifics. Which model organisms did hires use? There were (roughly):

(23) Mouse
(15) Human
(12) Rat
(4) Monkey
(2) Drosophila

In terms of publications, successful faculty hires had a mean H-index of 11.1 (standard deviation ~ 4.4, statistically no different from the previous year).
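
For anyone unfamiliar with the metric: the h-index is the largest h such that an author has h papers with at least h citations each. A tiny sketch:

```python
# Minimal sketch of the h-index, given a list of per-paper citation counts.
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    cited = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cited, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# h = 3 here: the top three papers each have >= 3 citations,
# but there are not four papers with >= 4 citations.
assert h_index([25, 8, 5, 3, 3, 1]) == 3
```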

How many had Cell/Nature/Science papers? 22.6% (14/62) had a first- or second-author CNS paper, 53.2% (33/62) had a first- or second-author Nature Neuroscience/Neuron/Nature Methods paper, and 64.5% (40/62) had at least one of the two.

If you look at the IF through time, you can see a bump in the year prior to applying, suggesting that applicants get that “big paper” out just before they apply.

There’s a broader theory that I’ve heard from several people (outlined here) that the underlying requirement is really the cumulative impact factor. I have used the metric described in the link, where the approximate impact factor is taken from first-author publications and second-author publications are discounted 75% (reviews are ignored). Here are the CIFs over the past 8 years:
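
A sketch of that metric as I understand it – the data structure and field names here are hypothetical:

```python
# Sketch of the cumulative impact factor as described above: sum the journal
# impact factor over first-author papers, count second-author papers at 25%
# (i.e. discounted 75%), and skip reviews.
def cumulative_impact_factor(papers):
    """papers: iterable of dicts with 'impact_factor', 'author_position', 'is_review'."""
    total = 0.0
    for p in papers:
        if p["is_review"]:
            continue
        if p["author_position"] == 1:
            total += p["impact_factor"]
        elif p["author_position"] == 2:
            total += 0.25 * p["impact_factor"]
    return total
```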

Note that postdocs from NY+ area institutions do not have a significantly different H-index (10.42), cumulative IF (36.6), or rate of CNS papers (3/19), NN/Neuron/NM papers (7/19), or at least one of the two (9/19).

What about comparisons by gender? There appears to be a slight cumulative IF difference that is largely driven by differences in grad school publications – but I would need more data before I say something definitive here.


People seem to think that there are tons of internal hires out there, but the data definitely does not bear that out. I found only 3/62 instances of someone who was hired by the university where they either got their PhD or did their postdoc. However, there is some local geography at play: I looked at all the hires made by NY+ area institutions, and every one of these (6) came either from another NY+ area institution or from a Philadelphia institution.

NeuroRumblr, 2018 – 2019

Quick announcement –

I’ve refreshed the NeuroRumblr for the 2018 – 2019 job season. If you are a postdoc looking for an academic job, add yourself to The List so that search committees can reach out to you. Note that I refresh the list yearly, so if you have added yourself in the past you should fill out the form again for the new season. If you are on a faculty search committee, please feel free to email me at neurorumblr@gmail.com to gain access to The List (both this year’s and last year’s). Every year, I have gotten requests for access from every kind of institution across the world. As an aside, if you have been on a search committee that has used the list in the past and have ideas on how to make it more useful, or just have other thoughts, I’d be curious to hear them.

There is a page for labs that are looking for postdocs.

There is a page for labs that are looking for research staff.

There is a page to keep track of neuroscience conferences.

There is a page with collections of advice on being an academic and looking for academic jobs.

There is a twitter account (@neurorumblr) that I occasionally use to make announcements. The account will now automatically tweet, multiple times a day, about jobs that were put on the rumblr the previous day, as well as about upcoming job and conference deadlines. If you are a PI who placed an ad under postdocs or research staff, you can now add your twitter handle and it will tag you when it tweets. If you tweet and tag @neurorumblr, I will usually retweet it – more free advertising! The twitter account gets a lot of attention, and I keep hearing from people who have looked for jobs that they paid close attention to it.

Another reminder that I am looking to identify neuroscientists hired as tenure-track faculty over the previous year. I already have a lot of people on the list! But I know that’s not everyone.

Happy hunting.

Please help me identify neuroscientists hired as tenure-track assistant profs in the 2017-18 faculty job season

Last year, I tried to crowd-source a complete list of everyone who got hired into a neuroscience faculty job over the previous year. I think the list has almost everyone who was hired in the US… let’s see if we can do better this year?

I posted an analysis of some of the results here – one of the key “surprises” was that no, you don’t actually need a Cell/Nature/Science paper to get a faculty job.

If you know who was hired to fill one or more of the listed N. American assistant professor positions in neuroscience or an allied field, please email me with this information (neurorumblr@gmail.com).

To quote the requirements (stolen from Dynamic Ecology):

I only want information that’s been made publicly available, for instance via an official announcement on a departmental website, or by someone tweeting something like “I’ve accepted a TT job at Some College, I start Aug. 1!” If you want to pass on the information that you yourself have been hired into a faculty position, that’s fine too. All you’re doing is saving me from googling publicly-available information myself to figure out who was hired for which positions. Please do not contact me to pass on confidential information, in particular confidential information about hiring that has not yet been totally finalized.

Please do not contact me with nth-hand “information” you heard through the grapevine. Not even if you’re confident it’s reliable.

I’m interested in positions at all institutions of higher education, not just research universities. Even if the position is a pure teaching position with no research duties.

You should be using the Neuromethods slack

Ben Saunders has started a Slack for those of you in neuroscience who do, uh, neuroscience. The Neuromethods Slack is a place for scientists to discuss questions about experiments. There’s a channel for electrophysiology, a channel for the biophysics of rhodopsins, a channel for Drosophologists, a channel for data visualization, and so on. It is not the robust mix of science and nonsense that Twitter seems to generate – it is very much on-topic, and questions seem to get answered by other experts within a day or so. You should check it out!

Please help me identify neuroscientists hired as tenure-track assistant profs in the 2016-17 faculty job season

Over at Dynamic Ecology, Jeremy Fox asked whether people could help identify recently-hired tenure-track professors in Ecology. When he did this last year, he found that 51% of North American assistant professors that were hired were women. I asked on twitter whether this would be worth doing for neuroscience and everyone seemed in favor so here goes –

If you know who was hired to fill one or more of the listed N. American assistant professor positions in neuroscience or an allied field, please email me with this information (neurorumblr@gmail.com).

I’m just going to quote him on the requirements:

I only want information that’s been made publicly available, for instance via an official announcement on a departmental website, or by someone tweeting something like “I’ve accepted a TT job at Some College, I start Aug. 1!” If you want to pass on the information that you yourself have been hired into a faculty position, that’s fine too. All you’re doing is saving me from googling publicly-available information myself to figure out who was hired for which positions. Please do not contact me to pass on confidential information, in particular confidential information about hiring that has not yet been totally finalized.

Please do not contact me with nth-hand “information” you heard through the grapevine. Not even if you’re confident it’s reliable.

I’m interested in positions at all institutions of higher education, not just research universities. Even if the position is a pure teaching position with no research duties.


Who cares about science?

It’s easy to say something like “you can’t put a dollar amount on the value of science” except you can, quite easily. Governments do it all the time! So how much does the US government value science? Look above and you can easily see that, adjusting for inflation, the US government cares less about science than at any time in the last twenty years. But over those twenty years, the population has grown by 20%.

Another way to ask how much the US government values science is to look at how hard it is for a scientist to get funded at all. Looking at funding success rates, you can see how devastating the cuts have been: the success rate has gone from 30.5% to 18.3% over twenty years. And that’s on average. How hard is it for young scientists?

The funding rate for an under-36 scientist is 3%. 3%!

I keep getting told not to worry, science is a bipartisan issue. No one wants to implement Donald Trump’s total devastation of the science budget. But if the support is so bipartisan, why do I not feel comforted? Why has investment in science decreased no matter who has been in power? Remember these numbers when the budget is passed on Friday; that is how much the government supports you.

And all that is without getting into the even more direct attacks on science from Trump and people like the chairman of the science committee, Lamar Smith.