Scientists like me

I wanted to know how to find other scientists doing similar (but different!) work to me. I like to think that I know most of the people working on nearby topics, but what about people who take similar approaches on totally different problems? There were a lot of good suggestions (especially the neuromatch algorithm), but I want to highlight two in particular:

Michael Hendricks mentioned the Journal/Author Name Estimator (JANE), which takes your abstracts and tries to figure out who you are most similar to. When I throw in a few of my abstracts I mostly get C. elegans people I know:

Annika Barber had an even better suggestion; playing around with NIH Matchmaker to find grants that are similar. It pulls up a lot of really interesting projects! Especially when I remove the name of my model organism.

3% of Neuroscientists are here for revenge

I was curious how people got into neuroscience. Random happenstance? A lifelong love of gap junctions?  So I asked about it on twitter and got hundreds of responses.

I did a quick analysis of about half the responses, putting them in different categories. It quickly became clear that certain themes were popping up again and again:

It doesn’t surprise me too much that a lot of people became interested in neuroscience for a special reason: they cared about learning or decision-making or free will. A lot of you are here because a particular book or lecture was so good it blew you away. I was surprised by the number of people who were accidentally exposed to neuroscience because the class they wanted to take was filled, or it was a distributional requirement at their university, or there happened to be someone next door who was doing research on it. It turns out that serendipity is a major driver of passion!

This suggests that the best way to get other people interested in neuroscience might be to just explain it to them.

Definite shout-out to the 9% of you who are here because of the drugs and 3% of you who are doing this for revenge.

#cosyne2020, by the numbers

Cosyne is the largest COmputational and SYstems NEuroscience conference. Many many years ago, I thought it would be a good idea to study the conference. Who goes? If this is the place where people come to exchange ideas, it is useful to know who is doing that and who is dominating the conversation.

The first thing I look at is who is most active (who is an author on the most abstracts) – and this year it is a four-way tie between Larry Abbott, Mehrdad Jazayeri, Jonathan Pillow, and Byron Yu who I dub this year’s Hierarchs of Cosyne. The most active in previous years are:

  • 2004: L. Abbott/M. Meister
  • 2005: A. Zador
  • 2006: P. Dayan
  • 2007: L. Paninski
  • 2008: L. Paninski
  • 2009: J. Victor
  • 2010: A. Zador
  • 2011: L. Paninski
  • 2012: E. Simoncelli
  • 2013: J. Pillow/L. Abbott/L. Paninski
  • 2014: W. Gerstner
  • 2015: C. Brody
  • 2016: X. Wang
  • 2017: J. Pillow
  • 2018: K. Harris
  • 2019: J. Pillow
  • 2020: L. Abbott/M. Jazayeri/J. Pillow/B. Yu

If you look at the most active authors across all of Cosyne’s history, you can see things shift. Looking across time below, you can see that Jonathan Pillow is starting to catch up with Liam Paninski and is breaking away from Larry Abbott. The other startling ascent is Carlos Brody – there’s a whole lot of Princeton going on at Cosyne.

What is in the abstracts? In the past I have tried to find words that are more common in accepted than in rejected abstracts. I can visualize this using everyone’s favorite data visualization technique, WOOOORD CLOOOUUUDS. If you wanted to get accepted, it would have been better to write about decision-making trajectories using stable optogenetic attractor choices and worse to write about intrinsic geometry algorithms in tools and datasets.
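For the curious, the enrichment itself is simple to compute: smoothed log-odds of each word appearing in accepted versus rejected abstracts. Here is a minimal sketch with toy stand-in abstracts (the real analysis would run over the full accepted and rejected sets):

```python
from collections import Counter
import math

def enrichment(accepted, rejected, smoothing=1.0):
    """Log-odds of each word in accepted vs. rejected abstracts,
    with additive smoothing so unseen words don't blow up."""
    acc = Counter(w for text in accepted for w in text.lower().split())
    rej = Counter(w for text in rejected for w in text.lower().split())
    n_acc, n_rej = sum(acc.values()), sum(rej.values())
    vocab = set(acc) | set(rej)
    return {
        w: math.log((acc[w] + smoothing) / (n_acc + smoothing * len(vocab)))
         - math.log((rej[w] + smoothing) / (n_rej + smoothing * len(vocab)))
        for w in vocab
    }

# Toy inputs, just to show the mechanics.
scores = enrichment(
    ["optogenetic attractor dynamics of decision-making"],
    ["intrinsic geometry of oscillation datasets"],
)
print(sorted(scores, key=scores.get, reverse=True)[:3])
```

Positive scores are the "write about this" words; negative scores are the kiss of death.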

What is more common in accepted abstracts today than when Cosyne was in Denver two years ago? There are fewer intrinsic dendritic attention pathways and more primate context shape timescales.

At the Cognitive Computational Neuroscience (CCN) conference last fall, Richard Gao presented a super cool poster where he analyzed conference abstracts from different computational neuroscience conferences and used word2vec to make useful embeddings (which is hard! – I have tried this before and failed). I asked him if he could take a crack at this year’s Cosyne abstracts and he was kind enough to agree.

Just looking at the most common topics it looks like recurrent and deep networks are, uh, very popular.

His word2vec embedding representations need only ~5 PCs to capture most of the variance in the words. COSYNE is low-dimensional 😦
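If you want to run this check yourself, here is a rough sketch of the PCA step. The embedding matrix below is synthetic (built to be approximately rank 5), standing in for the real word2vec vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for word2vec vectors: 500 "words" x 100 dims, constructed to be
# approximately rank 5 plus a little noise. The claim being tested is that
# a handful of PCs capture most of the variance.
low_rank = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 100))
embeddings = low_rank + 0.05 * rng.normal(size=(500, 100))

# PCA via SVD on the mean-centered matrix.
centered = embeddings - embeddings.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
variance = singular_values**2 / np.sum(singular_values**2)

print(f"variance captured by first 5 PCs: {variance[:5].sum():.3f}")
```

On real embeddings you would just swap in the trained vectors and look at where the variance curve flattens out.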

But this is really cool: he used UMAP to look at how similar the embeddings were in different topics. It looks to me like there are classic sensory/processing abstracts in the top left, decision-making in the top right, and models on the bottom? Maybe?

And if he performs hierarchical clustering:

Finally Richard can look at how similar words are in the abstracts. What is most similar to dimensionality-reduction? RNNs.

Which words are most like MANIFOLDS? Deep networks, population coding, and cerebellum (???).

Who looks at oscillations? People who study pyramidal neurons and hippocampus.
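These nearest-word queries boil down to cosine similarity in the embedding space. A toy sketch with made-up vectors (real values would come from the word2vec model trained on the abstracts):

```python
import numpy as np

# Toy embedding table -- invented numbers, chosen only to show the query
# mechanics, not actual Cosyne embeddings.
emb = {
    "dimensionality-reduction": np.array([0.9, 0.1, 0.0]),
    "rnn":                      np.array([0.8, 0.2, 0.1]),
    "oscillations":             np.array([0.0, 0.9, 0.3]),
    "hippocampus":              np.array([0.1, 0.8, 0.4]),
}

def most_similar(word, k=1):
    """Rank the other words by cosine similarity to `word`."""
    q = emb[word]
    scores = {
        w: float(v @ q / (np.linalg.norm(v) * np.linalg.norm(q)))
        for w, v in emb.items() if w != word
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(most_similar("dimensionality-reduction"))  # rnn is the nearest neighbor here
```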

All of this can be found in a notebook at Richard’s GitHub.

Finally, how is everyone connected? I have plotted everyone who is attending Cosyne2020, where connections are between any two people who have co-authored an abstract. Please note that for technical and historical reasons, I find authors by (first initial, lastname). This leads to some ambiguity because sometimes two people share this ID.

Click the picture for a high-res PDF.

There are too many people who have attended Cosyne throughout the years to meaningfully visualize everyone, so I have split them into two groups. First, the Superusers – people who have been an author on 10+ abstracts, and their co-authors who have also been on 10+ abstracts.

Probably more interesting to most people is the graph of the Regulars – people who have been on 5+ abstracts.

I’m just going to pull out a few (colored) clusters here:

And finally, the connected components of everyone at Cosyne2020!
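For anyone wanting to reproduce this: connected components over a co-authorship list can be computed with a simple union-find, using the same (first initial, lastname) ID scheme described above. A minimal sketch with invented names:

```python
from itertools import combinations

def author_id(name):
    """Collapse a full name to (first initial, last name) -- the same
    scheme used for the plots, ambiguities and all."""
    parts = name.split()
    return (parts[0][0].upper(), parts[-1].lower())

def connected_components(abstracts):
    """Union-find over co-authorship: two people are linked if they
    share any abstract; components are the connected groups."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for authors in abstracts:
        ids = [author_id(a) for a in authors]
        for a, b in combinations(ids, 2):
            union(a, b)
        for a in ids:        # single-author abstracts still get a node
            find(a)

    groups = {}
    for node in parent:
        groups.setdefault(find(node), set()).add(node)
    return list(groups.values())

# Invented example abstracts.
comps = connected_components([
    ["Alice Smith", "Bob Jones"],
    ["Bob Jones", "Carol Diaz"],
    ["Dan Wu"],
])
print([len(c) for c in comps])
```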


Please help me identify neuroscientists hired as tenure-track assistant profs in the 2018-19 faculty job season

For the past two years, I tried to crowd-source a complete list of everyone who got hired into a neuroscience faculty job over the previous year. I think the list has almost everyone who was hired in the US… let’s see if we can do better this year?

I posted an analysis of some of the results here – one of the key “surprises” was that no, you don’t actually need a Cell/Nature/Science paper to get a faculty job.

If you know who was hired to fill one or more of the listed N. American assistant professor positions in neuroscience or an allied field, please email me with this information.

To quote the requirements:

I only want information that’s been made publicly available, for instance via an official announcement on a departmental website, or by someone tweeting something like “I’ve accepted a TT job at Some College, I start Aug. 1!” If you want to pass on the information that you yourself have been hired into a faculty position, that’s fine too. All you’re doing is saving me from googling publicly-available information myself to figure out who was hired for which positions. Please do not contact me to pass on confidential information, in particular confidential information about hiring that has not yet been totally finalized.

Please do not contact me with nth-hand “information” you heard through the grapevine. Not even if you’re confident it’s reliable.

I’m interested in positions at all institutions of higher education, not just research universities. Even if the position is a pure teaching position with no research duties.

Can we even understand what the responses of a neuron ‘represent’?


  • Deep neural networks are forcing us to rethink what it means to understand what a neuron is doing
  • Does it make sense to talk about a neuron representing a single feature instead of a confluence of features (a la Kording)? Is understanding the way a neuron responds in one context good enough?
  • If a neural response is correlated with something in the environment, does it represent it?
  • There is a difference in understanding encoding versus decoding versus mechanistic computations
  • Picking up from an argument on twitter
  • I like the word manticore

What if I told you that the picture above – a mythical creature called a manticore – had a representation of a human in it? You might nod your head and say yes, that has some part of a human represented in it. Now what if I told you it had a representation of a lion? Well you might hem and haw a bit more, not sure if you’d ever seen a grey lion before, or even a lion with a body that looked quite like that, but yes, you’d tentatively say. You can see a kind of representative lion in there.

Now I go further. That picture also represents a giraffe. Not at all, you might say. But I’d press – it has four legs. It has a long body. A tail, just like a giraffe. There is some information there about what a giraffe looks like. You’d look at me funny and shrug your shoulders and say sure, why not. And then I’d go back to the beginning and say, you know what, this whole conversation is idiotic. It’s not representative of a human or a lion or a giraffe. It’s a picture of a manticore for god’s sake. And we all know that!

Let’s chat about the manticore theory of neuroscience.

One of the big efforts in neuroscience – and now in deep neural networks – has been to identify the manticores. We want to know why this neuron is responding – is it responding to a dark spot, a moving grating, an odor, a sense of touch, what? In other words, what information is represented in the neuron’s responses? And in deep networks we want to understand what is going on at each stage of the network so we can understand how to build them better. But because of the precise mathematical nature of the networks, we can understand every nook and cranny of them a bit better. This more precise understanding of artificial network responses seems to be leading to a split between neuroscientists and those who work with Deep Networks on how to think about what neurons are representing.

This all started with the Age of Hubel and Wiesel: they found that they could get visual neurons to fire by placing precisely located circles and lines in front of an animal. This neuron responded to a dark circle here. That neuron responded to a bright line there. These neurons are representing the visual world through a series of dots and dashes.

And you can continue up the neural hierarchy and the complexity of stimuli you present to animals. Textures, toilet brushes, faces, things like this. Certain neurons look like they code for one thing or another.

But neurons aren’t actually so simple. Yes, this neuron may respond to any edge it sees on an object but it will also respond differently if the animal is running. So, maybe it represents running? And it also responds differently if there is sound. So, it represents edges and running and sound? Or it represents edges differently when there is sound?

This is what those who work with artificial neural networks appreciate more fully than us neuroscientists. Neurons are complex machines that respond to all sorts of things at the same time. We are like the blind men and the elephant, desperately trying to grasp what this complex beast of responses really is. But there is also a key difference here. The blind men come to different conclusions about the whole animal after sampling just a little bit of it, which is not super useful. Neuroscientists have the advantage that they may not care about every nook and cranny of the animal’s response – it is useful enough to explain what the neuron responds to on average, or in this particular context.

Even still, it can be hard to understand precisely what a neuron, artificial or otherwise, is representing to other neurons. And the statement itself – representation – can mean some fundamentally different things.

How can we know what a neuron – or collection of neurons – is representing? One method that has been used is to present a bunch of different stimuli to a network and ask what it responds to. Does it respond to faces and not cars in one layer? Maybe motion and not still images in another?

You can get even more careful measurements by asking about a precise mathematical quantity, mutual information, that quantifies how much of a relationship there is between these features and neural responses.
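Concretely, for discrete features and discretized responses the mutual information can be computed straight from the empirical joint distribution. A small sketch with a made-up neuron:

```python
from collections import Counter
import math

def mutual_information(xs, ys):
    """I(X;Y) in bits for two paired discrete sequences, estimated
    from the empirical joint distribution."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# A hypothetical neuron that spikes exactly when a face is present:
# with faces and cars equally likely, that is 1 bit of information.
stimulus = ["face", "car", "face", "car"] * 25
response = [1 if s == "face" else 0 for s in stimulus]
print(round(mutual_information(stimulus, response), 3))  # 1.0
```

(In practice estimating this from limited, noisy spike data is its own can of worms; this is the idealized version.)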

But there are problems here. Consider the (possibly apocryphal) story about a network that was trained to detect the difference between Russian and American tanks. It worked fantastically – but it turned out that it was exploiting the fact that one set of pictures was taken in the sunlight and the other set was taken when it was cloudy. What, then, was the network representing? Russian and American tanks? Light and dark? More complex statistical properties relating to the coordination of light-dark-light-dark that combines both differences in tanks and differences in light intensities, a feature so alien that we would not even have a name to describe it?

At one level, it clearly had representations of Russian tanks and American tanks – in the world it had experienced. In the outside world, if it was shown a picture of a bright blue sky it may exclaim, “ah, [COUNTRY’S] beautiful tank!” But we would map it on to a bright sunny day. What something is representing only makes sense in the context of a particular set of experiences. Anything else is meaningless.

Similarly, what something is representing only makes sense in the context of what it can report. Understanding this context has allowed neuroscience to make strides in understanding what the nervous system is doing: natural stimuli (trees and nature and textures instead of random white noise) have given us a more intimate knowledge of how the retina functions and the V2 portion of visual cortex.

We could also consider a set of neurons confronted with a ball being tossed up and down and told to respond with where it is. If you queried the network to ask whether it was using the speed of the ball to make this decision you would find that there was information about the speed! But why? Is it because it is computing the flow of the object through time? Or is it because the ball moves fastest when it is closest to the hand (when it is thrown up with force, or falls down with gravity) and slowest when it is high up (as gravity inevitably slows it down and reverses its course)? Yes, you could now read out velocity if you wanted to – in that situation.
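A quick simulation makes the point: under gravity, speed and height are tightly (negatively) coupled, so any network that tracks position carries speed information for free. This is just textbook kinematics with arbitrary numbers plugged in:

```python
import numpy as np

# Ball tossed straight up: height h(t) = v0*t - g*t^2/2, speed |v0 - g*t|.
g, v0 = 9.8, 4.9                       # arbitrary illustrative values
t = np.linspace(0, 2 * v0 / g, 200)    # one full up-and-down flight

height = v0 * t - 0.5 * g * t**2
speed = np.abs(v0 - g * t)

# Speed is largest near the hand (low height) and zero at the peak, so
# height and speed are strongly anti-correlated by the physics alone.
corr = np.corrcoef(height, speed)[0, 1]
print(f"correlation(height, speed) = {corr:.2f}")
```

So a decoder finding "speed information" in a position-reporting network may be finding nothing more than this correlation.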

There are only two ways to understand what it is representing: in the context of what it is asked to report, or by understanding precisely the mechanistic series of computations that gives rise to the representation – and how it maps on to ‘our’ definition.

Now ask yourself a third question: what if the feature were somehow wiped from the network and it did fine at whatever task it was set to? Was it representing the feature before? In one sense, no: the feature was never actually used, it was just some noise in the system that happened to correlate with something we thought was important. In another sense, yes: clearly the representation was there because we could specifically remove that feature! It depends on what you mean by representation and what you want to say about it.

This is the difference in encoding vs decoding. It is important to understand what a neuron is encoding because we do not know the full extent of what could happen to it or where it came from. It is equally important to understand what is decoded from a neuron because this is the only meaningfully-encoded information! In a way, this is the difference between thinking about neurons as passive encoders versus active decoders.

The encoding framework is needed because we don’t know what the neuron is really representing, or how it works in a network. We need to be agnostic to the decoding. However, ultimately what we want is an explanation for what information is decoded from the neuron – what is the meaningful information that it passes to other neurons. But this is really, really hard!

Is it even meaningful to talk about a representation otherwise?

Ultimately, if we are to understand how nervous systems are functioning, we need to understand a bit of all of these concepts. But in a time when we can start getting our hands around the shape of the manticore by mathematically probing neural responses, we also need to understand, very carefully, what we mean when we say “this neuron is representing X”. We need to understand that sometimes we want to know everything about how a neuron responds and sometimes we want to understand how it responds in a given context. Understanding the manifold of possible responses for a neuron, on the other hand, may make things too complex for our human minds to get a handle on. The very particulars of neural responses are what give us the adversarial examples in artificial networks that seem so wrong. Perhaps what we really want is to not understand the peaks and troughs of the neural manifold, but some piecewise approximations that are wrong but understandable and close enough.

Other ideas

  • Highly correlated objects will provide information about each other. May not show information in different testing conditions
  • What is being computed in high-D space?
  • If we have some correlate of a feature, does it matter if it is not used?
  • What we say we want when we look for representations is intent or causality
  • This is a different concept than “mental representation”
  • Representation at a neuron level or network level? What if one is decodable and one is not?
  • We think of neurons as passive encoders or simple decoders, but why not think of them as acting on their environment in the same way as any other cell/organism? What difference is there really?
  • Closed loop vs static encoders
  • ICA gives you oriented gabors etc. So: is the representation of edges or is the representation of the independent components of a scene?

Thoughts on freedom of will

Some rough notes from a personal attempt to clarify my own thinking. Consider this a work-in-progress and probably wrong.

Principles underlying feeling of freedom of will (note that this is different from freedom of action or actual agency):

  1. Bayesian updating (distribution of decisions)
  2. State/neural dynamics are unpredictable in the future
  3. Accessibility of internal state
  4. Web of concepts
  5. Feedforward-ish network

What is it that makes someone believe themselves to be free? That they can exercise willpower in pursuit of moral agency?

There are a few concepts from neuroscience that we want to understand for the feeling of freedom in a deterministic world.

The first concept we have to understand is that the brain is a neural network. It has neurons connected to other neurons connected to other neurons. We like to imagine them as step-by-step programs that slowly process information and allow an organism to make a decision but in reality they are vastly interconnected. Still, there is some validity to levels of feedforward-ness! The important thing is that not every neuron has access to the information from every other neuron – and not every ‘layer’, or collection of neurons, has access to information from every other ‘layer’.

The second concept is that the brain is interested in the fundamental unpredictability of the future. There are informational constraints on knowledge about what the internal state of a system will be, in both the sense of the current dynamics and the broader sense of ‘hunger’, ‘thirst’, ‘sleepy’, etc. Each of these modulates the activity of different layers of the brain in a variety of ways, often through neuromodulatory pathways. Further, as a corollary of this and the first concept, different layers have imperfect access to the state of other neurons and other layers.

The third concept is the implementation of ideas and memories as a web of concepts. Smelling an odor can transport you back to the first time you smelled it, with all the associated feelings. No idea exists in isolation but accessing one will inevitably access a variety of others (to a greater or lesser extent). This is foundational to how memories are stored.

The fourth and final concept is the Bayesian Brain, the idea that brains represent probability distributions and compute in terms of inference. No 19th century symbolic thinking for us – we process information in a manner that fundamentally requires us to think of the distribution of possible options or possible futures.

How does this give rise to a feeling of freedom of will? Suppose you were considering whether you were going to perform some action or not. When considering which of various possibilities you will take, there is some set of neurons making this consideration. These neurons do not have access to all of the possible neurons involved in the decision. In other words, these neurons do not know which possible action they will take. Instead they must operate on the set of probabilities for the future state of the network. The operation on this distribution is the feeling of free will – that there is an internal state, or internal dynamics, that are inaccessible from the consideration. These neurons get input from the dynamics and can plausibly provide output (the feeling of willpower).

Imagining the future is similar. Will I have the possibility of choosing freely among many options? Yes, of course, because I am unclear what my future state will be. The probability distribution conditional on external and internal state is not unitary (though it can be with properly powerful external stimuli).

Could I choose to eat bacon and eggs this morning instead of my daily bowl of granola? Possibly; it exists in probability space of possible options. Will I do so? I can sit there and sample my internal state and have some uncertainty about whether I desire that or not. That makes me feel free: I will choose to do so or not do so, unconstrained by outside forces, even though it is entirely deterministic.

Fundamentally, the feeling of freedom is a reflection of probabilistic and uncertain thinking.

#Cosyne17, by the numbers (updated)


Cosyne is the Computational and Systems Neuroscience conference held every year in Salt Lake City (though – hold the presses – it is moving to Denver in 2018). Its status as the keystone Computational and Systems Neuro conference makes it a fairly good representation of the direction of the field. Since I like data, here is this year’s Cosyne data dump.

First is who is the most active – and this year it is Jonathan Pillow who I dub this year’s Hierarch of Cosyne. The most active in previous years are:

  • 2004: L. Abbott/M. Meister
  • 2005: A. Zador
  • 2006: P. Dayan
  • 2007: L. Paninski
  • 2008: L. Paninski
  • 2009: J. Victor
  • 2010: A. Zador
  • 2011: L. Paninski
  • 2012: E. Simoncelli
  • 2013: J. Pillow/L. Abbott/L. Paninski
  • 2014: W. Gerstner
  • 2015: C. Brody
  • 2016: X. Wang


If you look at the total number of posters across all of Cosyne’s history, Liam Paninski is and probably always will be the leader. Evidently he was so prolific in the early years that they had to institute a new rule to nerf him like some overpowered video game character.

Visualizing the network diagram of co-authors also reveals a lot of structure in the conference (click for PDF):


And the network for the whole conference’s history is a dense mess with a soft and chewy center dominated by – you guessed it – the Paninski Posse (I am clustered into Sejnowski and Friends from my years at Salk).


People on twitter have seemed pretty excited about this data, so I will update this later with a link to a github repository.

Speaking of twitter, it is substantially more active than it has been in the past. Neuroscience Twitter keeps growing and is a great place to learn about new ideas in the field. Here is a feed of everyone attending who is on Twitter. Let me know if you want me to add you.

There are two events you should consider attending if you are at Cosyne: the Simons Foundation is hosting a social on Friday evening and on Saturday night there is a Hyperbolic Cosyne Party which you should RSVP to right away…!

On a personal note, I am giving a poster on the first night (I-49) and am co-organizing a workshop on Automated Methods for High-Dimensional Analysis. I hope to see you all there!

Previous years: [2014, 2015, 2016]


Update – I analyzed a few more things based on new data…


I was curious which institution had the most abstracts (measured by the presenting author’s institution). Then I realized I had last year’s data:


Somehow I had not fully realized NYU was so dominant at this conference.

I also looked at which words are most enriched in accepted Cosyne abstracts:

Ilana said that she sees: behavior. What is enriched in rejected abstracts? Oscillations, apparently (this is a big topic of conversation so far) 😦

Finally, I clustered the most common words that co-occur in abstracts. The clusters?

  1. Modeling/population/activity (purple)
  2. information/sensory/task/dynamics (orange)
  3. visual/cortex/stimuli/responses (puke green)
  4. network/function (bright blue)
  5. models/using/data (pine green)


Sophie Deneve and the efficient neural code

Neuroscientists have a schizophrenic view of how neurons work. On the one hand, we say, neurons are ultra-efficient and as precise as possible in their encoding of the world. On the other hand, neurons are pretty noisy, with the variability in their spiking increasing with the spike rate (Poisson spiking). In other words, there is information in the averaged firing rate – so long as you can look at enough spikes. One might say that this is a very foolish way to construct a good code to convey information, and yet if you look at the data that’s where we are*.

Sophie Deneve visited Princeton a month or so ago and gave a very insightful talk on how to reconcile these two viewpoints. Can a neural network be both precise and random?


The first thing to think about is that it is really, really weird that the spiking is irregular. Why not have a simple, consistent rate code? After all, when spikes enter the dendritic tree, noise will naturally be filtered out causing spiking at the cell body to become regular. We could just keep this regularity; after all, the decoding error of any downstream neuron will be much lower than for the irregular, noisy code. This should make us suspicious: maybe we see Poisson noise because there is something more going on.

We can first consider any individual neuron as a noisy accumulator of information about its input. The fast excitation, and slow inhibition of an efficient code makes every neuron’s voltage look like a random walk across an internal landscape, as it painstakingly finds the times when excitation is more than inhibition in order to fire off its spike.

So think about a network of neurons receiving some signal. Each neuron of the network is getting this input, causing its membrane voltage to quake a bit up and a bit down, slowly increasing with time and (excitatory) input. Eventually, it fires. But if the whole network is coding, we don’t want anything else to fire. After all, a neuron has fired, it has done its job, signal transmitted. So not only does the spike send output to the next set of neurons but it also sends inhibition back into the network, suppressing all the other neurons from firing! And if that neuron hadn’t fired, another one would have quickly taken its place.


This simple network has exactly the properties that we want. If you look at any given neuron, it is firing in a random fashion. And yet, if you look across neurons their firing is extremely precise!
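Here is a toy caricature of the scheme (not Denève and Machens’ actual model, just the qualitative idea): identical neurons all track the same coding error plus their own private noise; whichever crosses threshold first spikes, and that spike instantly inhibits the rest by resetting the shared error. Each neuron’s spike train looks random, but the population total precisely tracks the integrated signal.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_steps, threshold = 10, 5000, 1.0
drive = 0.01                            # shared input integrated per step

x = 0.0                                 # integrated signal
decoded = 0.0                           # what the population has "spiked out"
spikes = np.zeros((n_steps, n_neurons), dtype=int)

for t in range(n_steps):
    x += drive
    # Every neuron's voltage is the shared coding error plus private noise,
    # so which neuron wins the race to threshold is effectively random.
    v = (x - decoded) + 0.1 * rng.normal(size=n_neurons)
    winner = np.argmax(v)
    if v[winner] >= threshold:
        spikes[t, winner] = 1
        decoded += threshold            # one spike accounts for one
                                        # threshold's worth of signal --
                                        # the instant recurrent inhibition

total = spikes.sum()
per_neuron = spikes.sum(axis=0)
print(total, per_neuron)
```

Any individual row of `spikes` is irregular, yet the running sum of spikes stays within one threshold of the signal: precise at the population level, noisy at the single-neuron level.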

* Okay, the code is rarely actually Poisson. But a lot of the time it is close enough.


Denève, S., & Machens, C. (2016). Efficient codes and balanced networks. Nature Neuroscience, 19(3), 375–382. DOI: 10.1038/nn.4243

When did we start using information theory in neuroscience?

This question came up in journal club a little while ago.

The hypothesis that neurons in the brain are attempting to maximize their information about the world is a powerful one. Although usually attributed to Horace Barlow, the idea arose almost immediately after Shannon formalized his theory of information.

Remember, Shannon introduced information theory in 1948. Yet only four years later, MacKay and McCulloch (of the McCulloch-Pitts neuron!) published an article analyzing neural coding from the perspective of information theory. By assuming that a neuron is a communication channel, they wanted to understand what is the best ‘code’ for a neuron to use – a question which was already controversial in the field (it seems as if the dead will never die…). Specifically, they wanted to compare whether the occurrence of a spike was the informative signal or whether it was the time since the previous spike. They found, based on information theory, that it is the interval from the previous spike that can signal the most information.
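The flavor of the interval-code argument can be captured with one line of arithmetic: if spike timing can resolve one of n time bins, the position of a single spike carries log2(n) bits, whereas mere presence-or-absence of a spike in the window tops out at 1 bit. (This is a back-of-envelope illustration, not MacKay and McCulloch’s actual calculation, which accounts for noise and physiological limits.)

```python
import math

def bits_interval(n_bins):
    """A single spike whose timing resolves one of n_bins intervals."""
    return math.log2(n_bins)

def bits_presence():
    """A spike that is only 'there or not there' in the window."""
    return 1.0

for n in (2, 8, 64):
    print(n, bits_interval(n), bits_presence())
```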

And for those who want to break into the analog vs digital coding they have this to say:

nor is it our purpose in the following investigation to reopen the “analogical versus digital” question, which we believe to represent an unphysiological antithesis. The statistical nature of nervous activity must preclude anything approaching a realization in practice of the potential information capacity of either mechanism, and in our view the facts available are inadequate to justify detailed theorization at the present time

Around the same time, Von Neumann – of course it would be Von Neumann! – delivered a series of lectures analyzing coding from the perspective of idealized neurons of the McCulloch-Pitts variety. Given that these were lectures around the time of the publication of the work in the preceding paragraph, I am guessing that he knew of their work – but maybe not!

In 1954, Attneave looked at how visual perception is affected by information and the redundancy in the signal. He provides by far the most readable paper of the bunch. Here is the opening:

In this paper I shall indicate some of the ways in which the concepts and techniques of information theory may clarify our understanding of visual perception. When we begin to consider perception as an information-handling process, it quickly becomes clear that much of the information received by any higher organism is redundant. Sensory events are highly interdependent in both space and time: if we know at a given moment the states of a limited number of receptors (i.e., whether they are firing or not firing), we can make better-than-chance inferences with respect to the prior and subsequent states of these receptors, and also with respect to the present, prior, and subsequent states of other receptors.

He also has this charming figure:

Attneave's cat

What Attneave’s Cat demonstrates is that most of the information in the visual image of the cat – the soft curves, the pink of the ears, the flexing of the claws – is totally irrelevant to the detection of the cat. All you need is a few points with straight lines connecting them, and this redundancy is surely what the nervous system is relying on.

Finally, in 1955 there was a summer research school thingamajig hosted by Shannon, Minsky, McCarthy and Rochester with this as one of the research goals:

1. Application of information theory concepts to computing machines and brain models. A basic problem in information theory is that of transmitting information reliably over a noisy channel. An analogous problem in computing machines is that of reliable computing using unreliable elements. This problem has been studied by von Neumann for Sheffer stroke elements and by Shannon and Moore for relays; but there are still many open questions. The problem for several elements, the development of concepts similar to channel capacity, the sharper analysis of upper and lower bounds on the required redundancy, etc. are among the important issues. Another question deals with the theory of information networks where information flows in many closed loops (as contrasted with the simple one-way channel usually considered in communication theory). Questions of delay become very important in the closed loop case, and a whole new approach seems necessary. This would probably involve concepts such as partial entropies when a part of the past history of a message ensemble is known.

Shannon of course tried to have his cake and eat it too by warning of the dangers of misused information theory. If you are interested in more on the topic, Dimitrov, Lazar and Victor have a great review.

So there you go – it is arguably MacKay, McCulloch, Von Neumann, and Attneave who are the progenitors of Information Theory in Neuroscience.


Attneave, F. (1954). Some informational aspects of visual perception. Psychological Review, 61(3), 183–193. DOI: 10.1037/h0054663

Dimitrov, A., Lazar, A., & Victor, J. (2011). Information theory in neuroscience. Journal of Computational Neuroscience, 30(1), 1–5. DOI: 10.1007/s10827-011-0314-3

MacKay, D., & McCulloch, W. (1952). The limiting information capacity of a neuronal link. The Bulletin of Mathematical Biophysics, 14(2), 127–135. DOI: 10.1007/BF02477711

von Neumann, J. (1956). Probabilistic logics and the synthesis of reliable organisms from unreliable components. Automata Studies.

Logothetis, animal rights extremists, and support

While I was on an accidental blogging sabbatical, Nikos Logothetis stopped his work on non-human primates because of pressure from animal rights groups:

Logothetis’s research on the neural mechanisms of perception and object recognition has used rhesus macaques with electrode probes implanted in their brains. The work was the subject of a broadcast on German national television in September that showed footage filmed by an undercover animal rights activist working at the institute. The video purported to show animals being mistreated.

Logothetis has said the footage is inaccurate, presenting a rare emergency situation following surgery as typical and showing stress behaviors deliberately prompted by the undercover caregiver. (His written rebuttal is here.) The broadcast triggered protests, however, and it prompted several investigations of animal care practices at the institute. Investigations by the Max Planck Society and animal protection authorities in the state of Baden-Württemberg found no serious violations of animal care rules. A third investigation by local Tübingen authorities that led to a police raid at the institute in late January is still ongoing.

Although this has been covered well elsewhere, I figured it was worth posting because it has seemed to disappear into the ether of conversation. It’s just last week’s news! But the effects are long-lasting. The Center for Integrative Neuroscience, where Logothetis works, has a motion for solidarity which you should take a moment to sign.

His most-cited paper used monkeys to compare local field potentials (neural electrical activity) and fMRI BOLD signals. Here are two relevant figures comparing the two:


He has many good papers studying vision. He also tried studying consciousness using vision once upon a time. So there’s that.