Sleep – what is it good for (absolutely nothing?)

Sleep can often feel like a fall into annihilation and rebirth. One moment you have all the memories and aches of a long day behind you, the next you wake up from nothingness into the start of something new. Or: a rush from some fictional slumber world into an entirely different waking world. What is it that’s happening inside your head? Why this rest?

Generally, the answer we are given is some sort of spring cleaning and consolidation: a removal of cruft and a return to what is important (baseline, learned). There is certainly plenty of evidence that the brain is doing this while you rest. One of the most powerful of these ‘resetting’ mechanisms is homeostatic plasticity. Homeostatic plasticity often feels like an overlooked form of learning, despite the gorgeous work being done on it (Gina Turrigiano’s and Eve Marder’s papers have long been some of my all-time favorites).

One simple experiment you can do to understand homeostatic plasticity is to take a slice of brain and dump TTX on it to block sodium channels and thus spiking. When you wash the TTX out days later, the neurons will be spiking like crazy. Slowly, they will return to their former firing rate. It seems like every neuron knows what its average spiking rate should be, and tries to reach it.
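The logic is easy to caricature in code. Below is a toy sketch of the general idea (the numbers and the gain rule are my own invention, not from any paper): a neuron scales a gain on its input so that its firing rate drifts back toward a set point.

```python
TARGET_RATE = 5.0  # hypothetical set point (Hz)
TAU = 50.0         # homeostatic time constant, in steps

def run(drive, gain, steps):
    """Evolve the gain under a toy homeostatic rule; return (rates, final gain)."""
    rates = []
    for _ in range(steps):
        rate = gain * drive
        gain += (TARGET_RATE - rate) / TAU  # nudge the gain toward the set point
        rates.append(rate)
    return rates, gain

# baseline: with drive = 5, the gain sits at 1 and the rate at the set point
_, gain = run(drive=5.0, gain=1.0, steps=2000)
# 'TTX': silence the input; the rule cranks the gain up and up
_, gain = run(drive=0.0, gain=gain, steps=500)
# wash-out: restore the input; the neuron fires like crazy, then relaxes back
rates, _ = run(drive=5.0, gain=gain, steps=2000)
print(rates[0], rates[-1])  # big overshoot at first, back near the set point at the end
```

The same picture as the slice experiment: silence inflates the gain, wash-out produces the crazy overshoot, and then the rate slowly homes back in.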

But when does it happen? I would naively think that it should happen while you are asleep, while your brain can sort out what happened during the day, reorganize, and get back where it wants to be. Let’s test that idea.

Take a rat and, at a certain age, blind one eye. Then just watch how the visual neurons change their overall firing rate. Like so:

[Figure: average firing rate over days after monocular deprivation]

At first the firing rate goes down. There is no input! Why should the neurons be doing anything? Then, slowly but surely, they go back to doing what they did before the eye was blinded. Same ol’, same ol’. Let’s look at what the animal is doing while the firing rate is returning to its former life:

[Figure: firing-rate recovery broken down by sleep and wake]

This is something of a WTF moment. Nothing during sleep, nothing at all? Only when it is awake and – mostly – behaving? What is going on here?

Here’s my (very, very) speculative possibility: something like efference copy. When an animal is asleep, it’s getting nothing new. It doesn’t know that anything is ‘wrong’. Homeostatic plasticity may be ‘returning to baseline’, but it may also be ‘responding to signals the same way on average’. And when it is asleep, what signals are there? But when it is moving – ah, that is when it gets new signals.

When the brain generates a motor signal, telling the body to move, it also sends signals back to the sensory areas of the brain to let them know what is going on. It is much easier to keep things stable when you already know that the world is going to move in a certain way. Perhaps – perhaps – it is when the animal is moving that neurons get the largest error signals from the brain, the largest ‘listen to me’ signals, and that is exactly when homeostatic plasticity should happen: when a neuron has something to measure its return to baseline against.

Reference

Hengen, K., Torrado Pacheco, A., McGregor, J., Van Hooser, S., & Turrigiano, G. (2016). Neuronal Firing Rate Homeostasis Is Inhibited by Sleep and Promoted by Wake. Cell, 165(1), 180-191. DOI: 10.1016/j.cell.2016.01.046

Mathematicians on planes: be careful of your sorcerous ways


Guido Menzio, an economist at UPenn, was on a plane, obsessively deriving some mathematical formulae, when…:

She decided to try out some small talk.

Is Syracuse home? She asked.

No, he replied curtly.

He similarly deflected further questions. He appeared laser-focused — perhaps too laser-focused — on the task at hand, those strange scribblings.

Rebuffed, the woman began reading her book. Or pretending to read, anyway. Shortly after boarding had finished, she flagged down a flight attendant and handed that crew-member a note of her own…

this quick-thinking traveler had Seen Something, and so she had Said Something.

That Something she’d seen had been her seatmate’s cryptic notes, scrawled in a script she didn’t recognize. Maybe it was code, or some foreign lettering, possibly the details of a plot to destroy the dozens of innocent lives aboard American Airlines Flight 3950. She may have felt it her duty to alert the authorities just to be safe. The curly-haired man was, the agent informed him politely, suspected of terrorism.

The curly-haired man laughed.

He laughed because those scribbles weren’t Arabic, or some other terrorist code. They were math.

Yes, math. A differential equation, to be exact.

…His nosy neighbor had spied him trying to work out some properties of the model of price setting he was about to present. Perhaps she couldn’t differentiate between differential equations and Arabic.

Somehow, this is not from The Onion.

Sophie Deneve and the efficient neural code

Neuroscientists have a schizophrenic view of how neurons work. On the one hand, we say, neurons are ultra-efficient and are as precise as possible in their encoding of the world. On the other hand, neurons are pretty noisy, with the variability in their spiking increasing with the spike rate (Poisson spiking). In other words, the information is in the averaged firing rate – so long as you can look at enough spikes. One might say that this is a very foolish way to construct a good code to convey information, and yet if you look at the data that’s where we are*.

Sophie Deneve visited Princeton a month or so ago and gave a very insightful talk on how to reconcile these two viewpoints. Can a neural network be both precise and random?


The first thing to think about is that it is really, really weird that the spiking is irregular. Why not have a simple, consistent rate code? After all, when spikes enter the dendritic tree, noise will naturally be filtered out causing spiking at the cell body to become regular. We could just keep this regularity; after all, the decoding error of any downstream neuron will be much lower than for the irregular, noisy code. This should make us suspicious: maybe we see Poisson noise because there is something more going on.

We can first consider any individual neuron as a noisy accumulator of information about its input. The fast excitation and slow inhibition of an efficient code make every neuron’s voltage look like a random walk across an internal landscape, as it painstakingly finds the times when excitation exceeds inhibition in order to fire off a spike.

So think about a network of neurons receiving some signal. Each neuron of the network is getting this input, causing its membrane voltage to quake a bit up and a bit down, slowly increasing with time and (excitatory) input. Eventually, one neuron fires. But if the whole network is coding, we don’t want anything else to fire. After all, the network has fired; it has done its job; signal transmitted. So not only does the spike send output to the next set of neurons, it also sends inhibition back into the network, suppressing all the other neurons from firing! And if that neuron hadn’t fired, another one would have quickly taken its place.

[Figure: network coding schematic]

This simple network has exactly the properties that we want. If you look at any given neuron, it is firing in a random fashion. And yet, if you look across neurons their firing is extremely precise!
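This push-pull logic is easy to caricature. Here is a toy sketch of the idea (my own cartoon, not Denève and Machens’s actual model): every neuron noisily integrates the same input, and whichever neuron crosses threshold first spikes and inhibits the whole network, resetting the race.

```python
import random

random.seed(1)
N, THRESH, STEPS = 10, 1.0, 5000

v = [0.0] * N       # membrane voltages
spikes = [0] * N    # per-neuron spike counts
pop_spikes = 0      # network-wide spike count

for t in range(STEPS):
    drive = 0.02  # shared input signal (constant here, for simplicity)
    for i in range(N):
        v[i] += drive + random.gauss(0, 0.05)  # noisy integration
    # the neuron closest to threshold wins the race...
    winner = max(range(N), key=lambda i: v[i])
    if v[winner] >= THRESH:
        spikes[winner] += 1
        pop_spikes += 1
        # ...and its spike feeds inhibition back to everyone
        v = [x - THRESH for x in v]

print(sorted(spikes))  # any single neuron's count is noisy and irregular
print(pop_spikes)      # but the population total tightly tracks the total input
```

Which neuron fires on any given cycle is essentially random, yet the total number of population spikes is pinned down by the input, which is the precise-yet-Poisson-looking behavior the talk was about.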

* Okay, the code is rarely actually Poisson. But a lot of the time it is close enough.

References

Denève, S., & Machens, C. (2016). Efficient codes and balanced networks. Nature Neuroscience, 19(3), 375-382. DOI: 10.1038/nn.4243

I have seen things you wouldn’t believe (in my mind)

When two different people perceive blue, is it the same to both of them? When two people imagine it, is it the same? Can everyone even imagine it?

If I tell you to imagine a beach, you can picture the golden sand and turquoise waves. If I ask for a red triangle, your mind gets to drawing. And mom’s face? Of course.

You experience this differently, sure. Some of you see a photorealistic beach, others a shadowy cartoon. Some of you can make it up, others only “see” a beach they’ve visited. Some of you have to work harder to paint the canvas. Some of you can’t hang onto the canvas for long. But nearly all of you have a canvas.

I don’t. I have never visualized anything in my entire life. I can’t “see” my father’s face or a bouncing blue ball, my childhood bedroom or the run I went on ten minutes ago. I thought “counting sheep” was a metaphor. I’m 30 years old and I never knew a human could do any of this. And it is blowing my goddamned mind…

I opened my Facebook chat list and hunted green dots like Pac-Man. Any friend who happened to be online received what must’ve sounded like a hideous pick-up line at 2 o’clock in the morning:
—If I ask you to imagine a beach, how would you describe what happens in your mind?
—Uhh, I imagine a beach. What?
—Like, the idea of a beach. Right?
—Well, there are waves, sand. Umbrellas. It’s a relaxing picture. You okay?
—But it’s not actually a picture? There’s no visual component?
—Yes there is, in my mind. What the hell are you talking about?
—Is it in color?
—Yes…..
—How often do your thoughts have a visual element?
—A thousand times a day?
—Oh my God.

And so on. Read the whole thing, and this as well. How common is something like this? Judging by internet comments – very common, especially for other sensory modalities. I can visualize fine, though my imaginary ‘sense of place’ is probably stronger, but I cannot ‘imagine’ a taste or smell to save my life. I once went into a fancy cocktail bar and asked the owner how he came up with the cocktails. He just thought about how two ingredients would taste together, he said, and then he combined them like that. Whoa, whoa, whoa, I said, you can imagine tastes? And combine them in your mind?

Who knows what others imagine in their mind? Does imagining a picture mean the same thing to different people? Is it vivid or faded, cartoonish or realistic?

When we do experiments with animals – how much are we relying on this supposed universality which, even among humans, is anything but?

Friday Fun: Science Combat


Someone has combined pixel art, Mortal Kombat, and famous scientists. This is actually going to be a real video game released for Superinteressante magazine at some point. Until then, gaze in awe at the mighty combatants.

Brain Prize 2016

The Brain Prize, a thing I don’t think I knew existed, just gave €1,000,000 to three neuroscientists for their work on LTP. As with most prizes, the best part is the motivation to go back and read classic papers!

The best winner was Richard Morris because he kind of revolutionized the memory field with this figure:

[Figure: the Morris water maze]

Yes, he created the Morris Water Maze, used to study learning and memory in a seemingly-infinite number of papers.


When was the last time you went back and actually read the original Morris Water Maze paper? I know I had not ever read it before today: but I should have.

No less important was the work of Timothy Bliss (and Terje Lomo, who did not win) illustrating the induction of LTP. Most of us have probably heard “neurons that fire together, wire together” and this is the first real illustration of the phenomenon (in 1973):

[Figure: LTP induction (Bliss and Lomo, 1973)]

Bliss and Lomo were able to induce long-lasting changes in the strength of connections between two neurons by a “tetanic stimulation protocol”. The above figure is seared into my brain from my first year of graduate school, where Jeff Isaacson dragged us through paper after paper that used variations on this protocol to investigate the properties of LTP.

The final winner was Graham Collingridge who demonstrated that hippocampal LTP was induced via NMDA receptors. I don’t think this was the paper that demonstrated it, but I always found his 1986 paper on slow NMDA receptors quite beautiful:

[Figure: NMDA-dependent spiking before and after APV wash-out]

Here, he has blocked NMDA receptors with APV and sees no spiking after repeated stimulation. However, when this blocker is washed out, you see spiking only after receiving several inputs because of the slow timescale of the receptors.

While historically powerful, the focus on NMDA receptors can be misleading. LTP can be induced in many different ways, depending on the specific neuron type and brain region! For my money, I have always been a fan of the more generalized form, STDP. Every neuroscientist should read and understand the Markram et al (1997) paper that demonstrates it and the Bi and Poo (1998) paper that has this gorgeous figure:

[Figure: the STDP curve from Bi and Poo (1998)]

Read about the past, and remember where your science came from.

How to create neural circuit diagrams (updated)

[Figure 3 from the Nature Methods piece (nmeth.3777)]

My diagrams are always a mess, but maybe I could start following this advice a little more carefully?

Diagrams of even simple circuits are often unnecessarily complex, making understanding brain connectivity maps difficult…Encoding several variables without sacrificing information, while still maintaining clarity, is a challenge. To do this, exclude extraneous variables—vary a graphical element only if it encodes something relevant, and do not encode any variables twice…

For neural circuits such as the brainstem auditory circuits, physical arrangement is a fundamental part of function. Another topology that is commonly necessary in neural circuit diagrams is the laminar organization of the cerebral cortex. When some parts of a circuit diagram are anatomically correct, readers may assume all aspects of the figure are similarly correct. For example, if cells are in their appropriate layers, one may assume that the path that one axon travels to reach another cell is also accurate. Be careful not to portray misleading information—draw edges clearly within or between layers, and always clearly communicate any uncertainty in the circuit.

Update: Andrew Giessel pointed me to this collection of blog posts from Nature Methods on how to visualize biological data more generally. Recommended!

#Cosyne2016, by the numbers

[Figure: most prolific Cosyne 2016 authors]

Cosyne is the systems and computational neuroscience conference held every year in Salt Lake City and Snowbird. It is a pretty good representation of the direction the community is heading… though given the falling acceptance rate, you have to wonder how true that will stay, especially for those on the ‘fringe’. But 2016 is in the air, so it is time to update the Cosyne statistics.

I’m always curious about who is most active in any given year, and this year it is Xiao-Jing Wang, whom I dub this year’s Hierarch of Cosyne. I always think of his work on decision-making and the speed-accuracy tradeoff; he has used some very nice modeling of small circuits to show how these tasks could be implemented in nervous systems. Glancing over his posters, though, his work this year looks a bit more varied.

Still, it is nice to see such a large clump of people at the top: the distribution of posters is much flatter this year than previously, which suggests the top spots are becoming a bit less concentrated.

Here are the previous ‘leaders’:

  • 2004: L. Abbott/M. Meister
  • 2005: A. Zador
  • 2006: P. Dayan
  • 2007: L. Paninski
  • 2008: L. Paninski
  • 2009: J. Victor
  • 2010: A. Zador
  • 2011: L. Paninski
  • 2012: E. Simoncelli
  • 2013: J. Pillow/L. Abbott/L. Paninski
  • 2014: W. Gerstner
  • 2015: C. Brody
  • 2016: X. Wang

[Figure: most prolific Cosyne authors, all years]

If you look at the total number across all years, well, Liam Paninski is still massacring everyone else. At this rate, even if Pope Paninski doesn’t submit any abstracts over the next few years and someone else submits six per year, it will still be a good half-decade before he could possibly be dethroned.

The network diagram of co-authors is interesting, as usual. Here is the network diagram for 2016 (click for PDF):

[Figure: Cosyne 2016 co-authorship network]

And the mess that is all-time Cosyne:

[Figure: all-time Cosyne co-authorship network]

I was curious about this network. How connected is it? What is its dimensionality? If you look at the eigenvalues of the adjacency matrix, you get:

[Figure: eigenvalue spectrum of the co-authorship adjacency matrix]

I put the first two eigenvectors at the bottom of this post, but suffice it to say the first eigenvector is basically Pouget vs. Latham! And the second is Pillow vs Paninski! So of course, I had to plot a few people in Pouget-Pillowspace:

[Figure: a few authors plotted in Pouget-Pillow space]

(What does this tell us? God knows, but I find it kind of funny. Pillowspace.)
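For the curious, the recipe is straightforward. Here is a sketch on a tiny, made-up co-authorship matrix (the names and counts are invented; the real one comes from the abstract author lists):

```python
import numpy as np

# toy symmetric co-authorship counts (made-up names and numbers)
authors = ["A", "B", "C", "D"]
adj = np.array([
    [0, 3, 0, 1],
    [3, 0, 2, 0],
    [0, 2, 0, 4],
    [1, 0, 4, 0],
], dtype=float)

# eigendecomposition of the symmetric adjacency matrix
vals, vecs = np.linalg.eigh(adj)
order = np.argsort(-np.abs(vals))  # sort eigenvalues by magnitude
top2 = vecs[:, order[:2]]          # the two leading eigenvectors

# project each author onto the leading eigenvectors -- "Pillowspace"
coords = {a: tuple(top2[i]) for i, a in enumerate(authors)}
print(coords)
```

Each author’s loading on the first two eigenvectors gives the 2-D coordinates in the scatter plot above.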

Finally, I took a bunch of abstracts and fed them through a Markov model to generate some prototypical Cosyne sentences. Here are some sentences you can submit for next year:

  • Based on gap in the solution with tighter synchrony manifested both a dark noise [and] much more abstract grammatical rules.
  • Tuning curves should not be crucial for an approximate Bayesian inference which would shift in sensory information about alternatives
  • However that information about 1 dimensional latent state would voluntarily switch to odor input pathways.
  • We used in the inter vibrissa evoked responses to obtain time frequency use of power law in sensory perception such manageable pieces have been argued to simultaneously [shift] acoustic patterns to food reward to significantly shifted responses
  • We obtained a computational capacity that is purely visual that the visual information may allow ganglion cells [to] use inhibitory coupling as NMDA receptors, pg iii, Dynamical State University
  • Here we find that the drifting gratings represent the performance of the movement.
  • For example, competing perceptions thereby preserve the interactions between network modalities.
  • This modeling framework of goal changes uses [the] gamma distribution.
  • Computation and spontaneous activity at the other stimulus saliency is innocuous and their target location in cortex encodes the initiation.
  • It is known as the presentation of the forepaw target reaction times is described with low dimensional systems theory Laura Busse Andrea Benucci Matteo Carandini Smith-Kettlewell Eye Research.

Note: sorry about the small font size. This is normally a pet peeve of mine. I need to get access to Illustrator to fix it and will do so later…

The first two eigenvectors:

[Figure: the first two eigenvectors]

Punctuation in novels

[Figure: punctuation in Faulkner versus McCarthy]

I found some beautiful posters the other day that showed the punctuation in different novels. I was immediately curious whether I could do something similar, and wrote a little script (code here) to extract the punctuation and print out a compressed representation of my favorite novels.
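The actual script is linked above; the core of the idea fits in a few lines. A minimal sketch:

```python
def punctuation_stream(text, keep=".,;:!?-'\""):
    """Strip everything except punctuation, leaving a compressed 'fingerprint'."""
    return "".join(ch for ch in text if ch in keep)

sample = ("It is a truth universally acknowledged, that a single man in "
          "possession of a good fortune, must be in want of a wife.")
print(punctuation_stream(sample))  # ",,."
```

Run over a whole novel, that stream of marks is the compressed representation in the figures below.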

Then, like any proper scientist, I looked at the data and did some simple stats. Go see on Medium!

Here are a few of the (almost) full sets of punctuation from a couple of novels. For Pride and Prejudice, note the zoom-out versus the zoom-in:

[Figure: Pride and Prejudice, zoomed out]

[Figure: Pride and Prejudice, zoomed in]

Here are the files for Pride and Prejudice (alternate), A Doll’s House, and Romeo and Juliet.

Please let me know in the comments how totally I was wrong in my Medium analysis, and if there is anything you would like to see.

Update: Here are a couple I thought were interesting. First, part of the Tractatus Logico-Philosophicus:

[Figure: punctuation of part of the Tractatus Logico-Philosophicus]

And then Ulysses, the difference between the beginning of the book (first) and the end of the book (second):

[Figure: punctuation at the beginning of Ulysses]

[Figure: punctuation at the end of Ulysses]

Posted in Art

The ballerina illusion

No matter how hard I try, I cannot get the spinning dancer illusion to flip. Looks like someone has found a much more powerful version of the illusion:

[Figure: the spinning ballerina illusion]

What I especially like about this version of the illusion is that I can totally get why it is happening. There aren’t great depth cues, so there is a powerful prior that if you can see a face, that face is looking in your direction – hence the direction flipping.

(via kottke)

Posted in Art