Two views of science

The pessimist:

These quotes give you a sense of these two books, both of which build on what Alan Richardson calls “one of the great lessons of the cognitive revolution”: “just how much of mental life remains closed to introspection.” As a brief summation, the unified thesis of Nørretranders’s and Wilson’s works looks something like this: We are not really in control. Not only are we not in control, but we are not even aware of the things of which we are not in control. Our ability to judge anything with any accuracy is a lie, as is our ability to perceive these lies as lies. Consciousness masquerades as awareness and agency, but the sense of self it conjures is an illusion. We are stranded in the great opaque secret of our biology, and what we call subjectivity is a powerless epiphenomenon, sort of like a helpless rider on the back of a galloping horse—the view is great, but pulling on the reins does nothing.

If this description of reality feels familiar to you, it’s because such a neuroscientifically inspired pessimism is a quiet but powerful strain of modern thinking.

The optimists:

The beauty of a living thing is not the atoms that go into it, but the way those atoms are put together. (Carl Sagan)

Poets say science takes away from the beauty of the stars – mere globs of gas atoms. I too can see the stars on a desert night, and feel them. But do I see less or more? The vastness of the heavens stretches my imagination – stuck on this carousel my little eye can catch one – million – year – old light. A vast pattern – of which I am a part… What is the pattern, or the meaning, or the why? It does not do harm to the mystery to know a little about it. For far more marvelous is the truth than any artists of the past imagined it. Why do the poets of the present not speak of it? What men are poets who can speak of Jupiter if he were a man, but if he is an immense spinning sphere of methane and ammonia must be silent? (Richard Feynman)

These are not necessarily mutually exclusive.

But I also found this Feynman poem, which I had never heard before:

…I stand at the seashore, alone, and start to think.

There are the rushing waves, mountains of molecules
Each stupidly minding its own business
Trillions apart, yet forming white surf in unison

Ages on ages, before any eyes could see
Year after year, thunderously pounding the shore as now
For whom, for what?
On a dead planet, with no life to entertain

Never at rest, tortured by energy
Wasted prodigiously by the sun, poured into space
A mite makes the sea roar

Deep in the sea, all molecules repeat the patterns
Of one another till complex new ones are formed
They make others like themselves
And a new dance starts

Growing in size and complexity
Living things, masses of atoms, DNA, protein
Dancing a pattern ever more intricate

Out of the cradle onto the dry land
Here it is standing
Atoms with consciousness, matter with curiosity
Stands at the sea, wonders at wondering

I, a universe of atoms
An atom in the universe

(This is obviously a response to one of my favorite poems, When I Have Fears That I May Cease To Be)


#Cosyne18, by the numbers

Where does the time go? Another year, another look at my favorite conference: Cosyne. Cosyne is the Computational and Systems Neuroscience conference, held this year in Denver. I find it useful each year as a way to assess where the field is and where it may be heading.

First up: who is the most active? This year it is Ken Harris, whom I dub this year’s Hierarch of Cosyne. The most active in previous years were:

  • 2004: L. Abbott/M. Meister
  • 2005: A. Zador
  • 2006: P. Dayan
  • 2007: L. Paninski
  • 2008: L. Paninski
  • 2009: J. Victor
  • 2010: A. Zador
  • 2011: L. Paninski
  • 2012: E. Simoncelli
  • 2013: J. Pillow/L. Abbott/L. Paninski
  • 2014: W. Gerstner
  • 2015: C. Brody
  • 2016: X. Wang
  • 2017: J. Pillow

If you look at the most active across all of Cosyne’s history, well, nothing ever changes.

Visualizing the network diagram of co-authorships reveals some of the structure in the computational neuroscience community (click for high-resolution PDF). And zooming in:

Plotting the network of the whole history of Cosyne is a mess – there are too many dense connections. Here are three other ways of looking at it. First, only plotting the superusers (people who have 20+ abstracts across Cosyne’s history, click for PDF):

Or alternately, the regulars (10+ abstracts across Cosyne’s history, click for PDF):

And, finally, the regulars + everyone they have collaborated with (click for PDF):

I’d say the long-term structure looks something like the New York Gang (green), the European Crew (purple), the High-Dimensional Deities (blue), the Ecstasy of Entropy (magenta), and some others that I can’t come up with good names for (comments welcome).
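For anyone curious how graphs like these get put together, here is a minimal sketch of the general approach, not the exact code behind the figures. The `abstracts` variable is a hypothetical stand-in for the parsed abstract book (one list of author names per abstract):

```python
# Sketch: build a co-authorship network from parsed abstracts.
# `abstracts` is a hypothetical stand-in: one author list per abstract.
from collections import Counter
from itertools import combinations

import networkx as nx

abstracts = [
    ["A. Author", "B. Author", "C. Author"],
    ["B. Author", "C. Author"],
    ["C. Author", "D. Author"],
]

# Abstract counts per author, for the superuser/regular cutoffs
n_abstracts = Counter(name for authors in abstracts for name in authors)

G = nx.Graph()
for authors in abstracts:
    for a, b in combinations(authors, 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1  # another shared abstract
        else:
            G.add_edge(a, b, weight=1)

# "Superusers" (20+ abstracts) and "regulars" (10+ abstracts)
superusers = {name for name, n in n_abstracts.items() if n >= 20}
regulars = {name for name, n in n_abstracts.items() if n >= 10}

# Regulars plus everyone they have collaborated with
neighborhood = regulars | {nbr for r in regulars if r in G for nbr in G[r]}
H = G.subgraph(neighborhood)
```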

Memming asked whether the central cluster was getting more dispersed (less cliquey) over time. This is kind of a hard question to answer. If you just look at how large the central connected group is over time, the answer is a resounding no: the community is more cohesive and more connected than ever before.

On the other hand, we can look within that central cluster. How tightly connected is it? If you look at mean path length – how long it takes to get from one author to another, like degrees of Kevin Bacon or an Erdős number (a Paninski number?) – then the largest cluster is becoming more dispersed. Dan Marinazzo suggested looking at network efficiency as a metric that is more robust to size. Network efficiency is roughly the inverse of path length: a value of 1 means you can get from one author to another in a single step, and a value near 0 means it takes forever.
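Both metrics are built into networkx; here is a toy sketch of how they would be computed (a built-in example graph stands in for the real co-authorship network):

```python
import networkx as nx

# Toy stand-in for one year's co-authorship network; the real
# analysis would use the graph built from the abstract author lists.
G = nx.karate_club_graph()

# Restrict to the largest connected component ("the central cluster")
core = G.subgraph(max(nx.connected_components(G), key=len))

# Mean shortest path length: the average "Paninski number" between
# two authors in the cluster. Bigger = more dispersed.
mean_path = nx.average_shortest_path_length(core)

# Global efficiency: the average of 1/distance over all pairs.
# 1 means everyone is one step apart; near 0 means most authors are
# many steps apart. Unlike mean path length, it is well defined even
# for disconnected graphs, which makes it more robust to size.
efficiency = nx.global_efficiency(G)

print(f"mean path length: {mean_path:.2f}, efficiency: {efficiency:.2f}")
```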

I now also have two years of segmented abstracts (both accepted and rejected). What are the most popular topics at Cosyne? I used doc2vec – a method that takes a document and embeds it in a high-dimensional space representing its semantic topics – and then visualized the embedding with t-SNE. The Cosyne Island that you see above shows the density of abstracts at each point. I’ve given the different islands names that represent the abstracts in each of them.
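In outline, the pipeline looks something like the sketch below. The toy abstracts and the hyperparameters are illustrative stand-ins, not the real submissions or settings:

```python
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.manifold import TSNE

# Hypothetical stand-ins for the real abstract texts
abstract_texts = (
    ["population dynamics in motor cortex during reaching"] * 50
    + ["a normative model of uncertainty during decision making"] * 50
)

docs = [
    TaggedDocument(words=text.lower().split(), tags=[i])
    for i, text in enumerate(abstract_texts)
]

# Embed each abstract as a vector in a shared semantic space
model = Doc2Vec(docs, vector_size=100, window=5, min_count=2, epochs=40)
vectors = np.array([model.dv[i] for i in range(len(docs))])
# (model.dv is gensim 4.x; older versions call it model.docvecs)

# Project to 2D with t-SNE; a kernel density estimate over these
# points gives the "island" map, with peaks where abstracts cluster
coords = TSNE(n_components=2, perplexity=30).fit_transform(vectors)
```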

The words that appear more often in 2018’s accepted abstracts are “movements”, “uncertainty”, and “motion” – looks like behavior!

The rejected abstracts are enriched in “orientation”, “techniques”, “highdimensional”, “retinal”, and “spontaneous” 😦

We can also look at words that are more likely to be accepted in 2018 than in 2017 – the big gainers:

And the big losers this year versus last year:
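The gainers and losers come from comparing word frequencies between two sets of abstracts. A smoothed log-odds ratio is one simple way to do this; here is a sketch with hypothetical stand-in data:

```python
import math
from collections import Counter

def log_odds(texts_a, texts_b, alpha=1.0):
    """Smoothed log-odds of each word appearing in A relative to B."""
    counts_a = Counter(w for t in texts_a for w in t.lower().split())
    counts_b = Counter(w for t in texts_b for w in t.lower().split())
    vocab = set(counts_a) | set(counts_b)
    n_a, n_b = sum(counts_a.values()), sum(counts_b.values())
    # alpha smoothing keeps words absent from one set from blowing up
    return {
        w: math.log((counts_a[w] + alpha) / (n_a + alpha * len(vocab)))
           - math.log((counts_b[w] + alpha) / (n_b + alpha * len(vocab)))
        for w in vocab
    }

# Hypothetical stand-ins for each year's accepted abstracts
accepted_2018 = ["movements and uncertainty during natural motion"]
accepted_2017 = ["orientation tuning in retinal populations"]

scores = log_odds(accepted_2018, accepted_2017)
gainers = sorted(scores, key=scores.get, reverse=True)[:20]
losers = sorted(scores, key=scores.get)[:20]
```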

Here is a list of the twitter accounts that will be at Cosyne.

Previous years: [2014, 2015, 2016, 2017]

What people mean when they say “maybe”

What probability do people perceive when they hear a word like ‘probably’ or ‘probably not’? Someone went and collected data on this to get the actual probabilities!

Here is some old data:

[This is mostly a personal reminder so I can find this graph again]


Making MATLAB pretty

Alright all y’all haters, it’s MATLAB time.

For better or worse, MATLAB is the language that is used for scientific programming in neuroscience. But it, uh, has some issues when it comes to visualization. One major issue is the clusterfuck that is exporting graphics to vector files like eps. We have all exported a nice-looking image in MATLAB into a vectorized format that not only mangles the image but also ends up somehow needing thousands of layers, right?  Thankfully, Vy Vo pointed me to a package on github that is able to clean up these exported files.

Here is my favorite example (before, after):

If you zoom in or click the image, you can see the awful crosshatching in the before image. Even better, it goes from 11,775 layers before to just 76 after.

On top of this, gramm is a toolbox to add ggplot2-like visualization capabilities to MATLAB:

(Although personally, I like the new MATLAB default color-scheme – but these plotting functions are light-years better than the standard package.)

Update: Ben de Bivort shared his lab’s in-house preferred colormaps. I love ’em.

Update x2: Here’s another way to export your figures into eps nicely. Also, nice perceptually uniform color maps.


#Cosyne17, by the numbers (updated)


Cosyne is the Computational and Systems Neuroscience conference held every year in Salt Lake City (though – hold the presses – it is moving to Denver in 2018). Its status as the keystone computational and systems neuro conference makes it a fairly good representation of the direction of the field. Since I like data, here is this year’s Cosyne data dump.

First up: who is the most active? This year it is Jonathan Pillow, whom I dub this year’s Hierarch of Cosyne. The most active in previous years were:

  • 2004: L. Abbott/M. Meister
  • 2005: A. Zador
  • 2006: P. Dayan
  • 2007: L. Paninski
  • 2008: L. Paninski
  • 2009: J. Victor
  • 2010: A. Zador
  • 2011: L. Paninski
  • 2012: E. Simoncelli
  • 2013: J. Pillow/L. Abbott/L. Paninski
  • 2014: W. Gerstner
  • 2015: C. Brody
  • 2016: X. Wang


If you look at the total number of posters across all of Cosyne’s history, Liam Paninski is and probably always will be the leader. Evidently he was so prolific in the early years that they had to institute a new rule to nerf him like some overpowered video game character.

Visualizing the network diagram of co-authors also reveals a lot of structure in the conference (click for PDF):


And the network for the whole conference’s history is a dense mess with a soft and chewy center dominated by – you guessed it – the Paninski Posse (I am clustered into Sejnowski and Friends from my years at Salk).


People on twitter have seemed pretty excited about this data, so I will update this later with a link to a github repository.

Speaking of twitter, it is substantially more active than it has been in the past. Neuroscience Twitter keeps growing and is a great place to learn about new ideas in the field. Here is a feed of everyone attending who is on Twitter. Let me know if you want me to add you.

There are two events you should consider attending if you are at Cosyne: the Simons Foundation is hosting a social on Friday evening and on Saturday night there is a Hyperbolic Cosyne Party which you should RSVP to right away…!

On a personal note, I am giving a poster on the first night (I-49) and am co-organizing a workshop on Automated Methods for High-Dimensional Analysis. I hope to see you all there!

Previous years: [2014, 2015, 2016]


Update – I analyzed a few more things based on new data…


I was curious which institution had the most abstracts (measured by the presenting author’s institution). Then I realized I had last year’s data:


Somehow I had not fully realized NYU was so dominant at this conference.

I also looked at which words are most enriched in accepted Cosyne abstracts:

Ilana said that she sees: behavior. What is enriched in rejected abstracts? Oscillations, apparently (this is a big topic of conversation so far) 😦

Finally, I clustered the most common words that co-occur in abstracts (one way to do this is sketched after the list). The clusters?

  1. Modeling/population/activity (purple)
  2. information/sensory/task/dynamics (orange)
  3. visual/cortex/stimuli/responses (puke green)
  4. network/function (bright blue)
  5. models/using/data (pine green)
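A minimal sketch of one way to get clusters like these: count how often each pair of words appears in the same abstract, then run community detection on the resulting co-occurrence graph. The toy abstracts below are hypothetical stand-ins for the real data:

```python
from itertools import combinations

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical tokenized abstracts; the real analysis would use the
# most common words across all Cosyne submissions
abstracts = [
    ["modeling", "population", "activity", "dynamics"],
    ["visual", "cortex", "stimuli", "responses"],
    ["modeling", "population", "activity"],
    ["visual", "cortex", "responses"],
]

# Weighted co-occurrence graph: edge weight = number of abstracts
# in which the two words appear together
G = nx.Graph()
for tokens in abstracts:
    for a, b in combinations(sorted(set(tokens)), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Modularity-based community detection recovers the word clusters
clusters = greedy_modularity_communities(G, weight="weight")
for i, cluster in enumerate(clusters, 1):
    print(i, sorted(cluster))
```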



RIP Roger Tsien, 1952-2016


Sad news today – Roger Tsien passed away one week ago.

Can anyone imagine biology today without GFP? And though he is best known for that – he did share the Nobel prize for GFP, after all – Roger Tsien did much more. One of my favorite stories about Roger (and he was a character; you couldn’t be at UCSD without hearing one or two Roger stories) came from a talk he gave while I was there. He was asked to step in at a somewhat late moment to give a series of three one-hour lectures on his life’s work. It was one of the best talks I have attended: clear, insightful, and packed with interesting anecdotes.

He began by describing his struggles as a graduate student, how he had a hard time doing electrophysiology. His solution? He created BAPTA, the calcium chelator that is the basis for neural imaging. Yeah: electrophysiology too hard? Just create the basis for calcium imaging. Much easier. (And he was quite honest; it was much easier for him.)

You should of course read that original BAPTA paper (it is under-appreciated!). Then of course there is his GFP paper, establishing a point mutation with much improved characteristics. Here is his Nobel lecture. I don’t think it’s an overstatement to say that these are just the tip of the iceberg.


Unrelated to all that, 8/20 edition

Will somebody please give Norm Macdonald another TV show? And why it’s partly his own fault. If you don’t like the moth joke, I don’t understand you.

The Irritating Gentleman (1874), by Berthold Woltze


The Heiman Lab turned their website into a text adventure. This is seriously amazing, though I’m a little upset it won’t let me eat the petri dish.

You will be able to order a self-driving Uber in Pittsburgh this month. Though obviously there will be someone sitting in the driver’s seat for safety reasons. Maybe the ride is free if you get in an accident?

Someone recorded their heart rate during a conference talk. Fear is the mind-killer:

When are babies born?


The Broad has spent $10 million this year and $5 million last year on the CRISPR dispute. Now we hear that a grad student claims he was working on CRISPR in the Zhang lab, but that the lab only really jumped on the work after seeing Doudna’s in vitro paper. The Broad says he was lying to get a job with Doudna and stay in the country.

Is it story that makes us read? Maybe not, but this essay has one of the worst defenses of its theses that I’ve seen in a while.

If you are a neuroscientist and want to see educated laypeople try to puzzle out neuroscience by logic and anecdotes (unsuccessfully), try reading this post and its comment thread.


Unrelated to all that, 8/13 edition

UKIP (and Brexit?) is driven by perception of immigration, not actual immigration.

What is the best classifier in machine learning? One paper suggested that we should all just go to random forests straight away. But then, maybe not. So… do whatever you were doing before? Anyway, thanks to this I learned about XGBoost.

When you eat a dried fig, you’re probably chewing fig-wasp mummies, too. Learn everything you can about figs and wasps and fig wasps!

Do animals fight wars, and if so, what was the largest war? This is the story of the Argentine ant and its continent-spanning supercolonies.

The NIH postdoc salaries are out for the year and you can see the Obama pay raise! …if you are a 0- or 1-year postdoc, that is.

Here is the Standard Model of physics in one equation (click through for an explanation):


Are markets efficient? A good discussion between Fama and Thaler. Unsurprisingly, a lot of it comes down to semantics.

Did the human brain evolve to speak multiple languages? I think the historical evidence is a resounding yes.

A ‘brief’ history of neural networks. I liked this from part four:

So, why indeed, did purely supervised learning with backpropagation not work well in the past? Geoffrey Hinton summarized the findings up to today in these four points:

  • Our labeled datasets were thousands of times too small.
  • Our computers were millions of times too slow.
  • We initialized the weights in a stupid way.
  • We used the wrong type of non-linearity.


Unrelated to all that, 8/6 edition

Ten simple rules for writing a postdoctoral fellowship and Ten simple rules for curating and facilitating small workshops

The super-recognizers of Scotland Yard:

In 2007, Neville had set up a unit to collate and circulate images of unidentified criminals captured on CCTV. Officers were asked to check the Met’s “Caught on Camera” notices to see if they knew any of the suspects. “It became apparent that some officers were much better than others,” Neville told me. “For example, if I received 100 names, some officers would have submitted ten or 15, while in the main they were one-off identifications.”…Met officers trawled through tens of thousands of hours of CCTV footage, identifying 609 suspects responsible for looting, arson and other criminal acts. One officer, PC Gary Collins, made 180 identifications, including that of one of the most high-profile suspects…

See also this paper on super-recognizers

National Geographic Travel Photographer of the Year 2016

Even though there were a lot of people in Ben Youssef, still here was more quiet and relaxing compare to the street outside in Marrakesh. I was waiting for the perfect timing to photograph for long time.


I found this open on my mobile browser window one day and I don’t remember why, but am thankful to Past Me:


“I have a great idea! Let’s get neural networks to build faces!”


OTOH they can infer 3D images of cars pretty well (this is a Big Deal)

What are some of the best algorithms of the 20th century?

We’ve been understanding the Faraday cage all wrong:

So I started looking in books and talking to people and sending emails. In the books, nothing! Well, a few of them mention the Faraday cage, but rarely with equations. And from experts in mathematics, physics, and electrical engineering, I got oddly assorted explanations. They said the skin depth effect was crucial, or this was an application of the theory of waveguides, or the key point was Babinet’s principle, or it was Floquet theory, or “the losses in the wires will drive everything…” And then at lunch one day, colleague n+1 told me, it’s in the Feynman Lectures [2]! And sure enough, Feynman gives an argument that appears to confirm the exponential intuition exactly. [The problem is, it is wrong.]

Everything you wanted to know about hair

Every number tells a story


CSHL Vision Course


I have just returned from two weeks in Cold Spring Harbor at the Computational Neuroscience: Vision course. I was not entirely sure what to expect. Maybe two weeks of your standard lectures? No – this was two weeks of intense scientific discussion punctuated with the occasional nerf fight (sometimes during lectures, sometimes not), beach bonfire, or table tennis match.

It was not just the material that was great, but the people. Every day brought in a fresh rotation of scientists ready to spend a couple of days discussing their work – and the work of the field – and to just hang out. I learned as much or more at the dinner table as I did in the seminar room. And it wasn’t just the senior scientists who were exhilarating; the other students were too. It is a bit intimidating seeing how much talent exists in the field… and how great they are as people.

I also learned that the graduate students at CSHL get to attend these courses for free. It was great to meet people from all of the labs and hear about the cool stuff going on. Of course, they live pretty well, too. Here is the back patio of my friend’s house:


I think I could get used to that?

Anyway, this is all a long-winded way of saying: if you get the chance, attend one of these courses! And being there motivated me to start making more of an effort to update the blog again. I swear, I swear…