The Obama raise (for scientists)

My dad, being my dad, sent me an article claiming that Obama was about to change overtime rules so that more salaried workers would be covered. Would I get paid more? Psh, yeah right, I said. But then I looked a bit more closely and it wasn’t so clear. The new proposed rules state that anyone making under $47,892 would be eligible for overtime (whereas previously the limit was a measly $24,000). That is: most postdocs will, technically, be eligible for overtime money if they work more than 40 hours per week.

So I decided to ask the Twitter hivemind and set off a predictable storm of WTFs. The summary is: yes, it looks right now like postdocs will be eligible for overtime pay, but there is a comment period during which exceptions to these rules can be proposed (I don’t think graduate students will be eligible, because they are “students”). No, no one thinks this will actually end up happening; somehow the NIH/NSF will make postdocs exempt from these rules (see a bit more here). Here are the full proposed rules. If you have opinions about them, please send in comments:

The Notice of Proposed Rulemaking (NPRM) was published on July 6, 2015 in the Federal Register (80 FR 38515) and invited interested parties to submit written comments on the proposed rule at www.regulations.gov on or before September 4, 2015. Only comments received during the comment period identified in the Federal Register published version of the NPRM will be considered part of the rulemaking record.

I was asked to do a storify of all the twitter responses but, uh, there were a lot of them and I wasn’t 100% paying attention. So here are some salient points:

  1. What are the job duties of a postdoc? Does going to a lecture count, or will that not count toward “work time” (if it does, do I get credit for reading a salient paper at home? At lab?)
  2. Is a fellow an employee, or are they different somehow? Is this technically a “salary”? This seems to be a point of confusion and came up repeatedly.
  3. Calling PDs “trainees” while also claiming them as exempt “learned professionals” is a joke.
  4. This may increase incentive to train PhDs and decrease incentive to hire postdocs (“For my lab, min PD cost would be $62k/yr, PhD cost $31k/yr all-in.”). Similarly, the influence may be most felt on small labs with less money, less on large “prestige” labs.

#1 is the most interesting question in general.

If enforced at all (hmm), this would function like a decrease in NIH/NSF/etc. funding. But let’s face it, I think we can all agree that the most likely outcome here is an ‘exemption’ for postdocs and other scientists…

Edit: I meant to include this: in the current overtime rules, there is a “learned professional” exemption that covers scientists – and is why they do not get overtime pay. To qualify for that exemption, there is a salary floor they must exceed ($455 per week, or ~$23,660 per year). The new proposed rules state:

In order to maintain the effectiveness of the salary level test, the Department proposes to set the standard salary level equal to the 40th percentile of earnings for full-time salaried workers ($921 per week, or $47,892 annually for a full-year worker, in 2013)

The NIH stipend levels are currently at $42,480 for a first-year postdoc, increasing yearly and passing this threshold in year 4. The fastest way to avoid the overtime rules would be to simply bump the first-year salary up to $47,893.
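A quick sanity check of the arithmetic behind those figures (using the weekly floor and stipend numbers quoted above):

```python
# The proposed rule sets the salary floor at $921/week; the NIH
# first-year stipend figure cited above is $42,480.
weekly_floor = 921
annual_floor = weekly_floor * 52   # 52 weeks -> $47,892, the rule's figure
first_year_stipend = 42480

print(annual_floor)                       # 47892
print(first_year_stipend < annual_floor)  # True: below the floor, so eligible
```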

Because brains are packed with knowledge and are yummy, that’s why

Creamy brains

“Mr. Sheep Man,” I asked, “why would that old man want to eat my brains?”

“Because brains packed with knowledge are yummy, that’s why. They’re nice and creamy. And sort of grainy at the same time.”

“So that’s why he wants me to spend a month cramming information in there, to suck it up afterward?”

“That’s the idea.”

“Don’t you think that’s awfully cruel?” I asked. “Speaking from the suckee’s point of view, of course.”

“But, hey, this kind of thing’s going on in libraries everywhere, you know. More or less, that is.”

This news staggered me. “In libraries everywhere?” I stammered.

(The Strange Library, Haruki Murakami)

Oh hi! I am still alive, physically if not so much mentally. Research, fellowship applications, and the like got too much for me over the past few months. Hopefully I can resume my normal posting schedule? To keep all of your brains nice and creamy, of course.

Small autonomous drones

Nature has a fascinating review on drones – and especially microdrones!

microdrones

For those who don’t have access, here are some highlights (somewhat technical):

Propulsive efficiencies for rotorcraft degrade as the vehicle size is reduced; an indicator of the energetic challenges for flight at small scales. Smaller size typically implies lower Reynolds numbers, which in turn suggests an increased dominance of viscous forces, causing greater drag coefficients and reduced lift coefficients compared with larger aircraft. To put this into perspective, this means that a scaled-down fixed-wing aircraft would be subject to a lower lift-to-drag ratio and thereby require greater relative forward velocity to maintain flight, with the associated drag and power penalty reducing the overall energetic efficiency. The impacts of scaling challenges (Fig. 3) are that smaller drones have less endurance, and that the overall flight times range from tens of seconds to tens of minutes — unfavourable compared with human-scale vehicles.

There are, however, manoeuvrability benefits that arise from decreased vehicle size. For example, the moment of inertia is a strong function of the vehicle’s characteristic dimension — a measure of a critical length of the vehicle, such as the chord length of a wing or length of a propeller in a similar manner as used in Reynolds number scaling. Because the moment of inertia of the vehicle scales with the characteristic dimension, L, raised to the fifth power, a decrease in size from an 11 m wingspan, four-seat aircraft such as the Cessna 172 to a 0.05 m rotor-to-rotor separation Blade Pico QX quadcopter implies that the Cessna has about 5 × 10^11 times the inertia of the quadcopter (with respect to roll)… This enhanced agility, often achieved at the expense of open-loop stability, requires increased emphasis on control — a challenge also exacerbated by the size, weight and power constraints of these small vehicles.
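You can check the quoted ratio yourself: with inertia scaling as L raised to the fifth power, the two lengths in the excerpt give almost exactly the 5 × 10^11 figure.

```python
# Moment of inertia scales roughly as characteristic length L^5.
# Lengths from the excerpt: Cessna 172 wingspan ~11 m, Blade Pico QX
# rotor-to-rotor separation ~0.05 m.
cessna_length = 11.0   # m
quad_length = 0.05     # m
inertia_ratio = (cessna_length / quad_length) ** 5
print(f"{inertia_ratio:.2e}")  # 5.15e+11, i.e. about 5 x 10^11
```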

microdrone flight vs mass


Improvements in microdrones will come from becoming more insect-like and adapting knowledge from biological models:


In many situations, such as search and rescue, parcel delivery in confined spaces and environmental monitoring, it may be advantageous to combine aerial and terrestrial capabilities (multimodal drones). Perching mechanisms could allow drones to land on walls and power lines in order to monitor the environment from a high vantage point while saving energy. Agile drones could move on the ground by using legs in conjunction with retractable or flapping wings. In an effort to minimize the total cost of transport, which will be increased by the additional locomotion mode, these future drones may benefit from using the same actuation system for flight control and ground locomotion…

Many vision-based insect capabilities have been replicated with small drones. For example, it has been shown that small fixed-wing drones and helicopters can regulate their distance from the ground using ventral optic flow while a GPS was used to maintain constant speed and an IMU was used to regulate roll angle. The addition of lateral optic flow sensors also allowed a fixed-wing drone to detect near-ground obstacles. Optic flow has also been used to perform both collision-free navigation and altitude control of indoor and outdoor fixed-wing drones without a GPS. In these drones, the roll angle was regulated by optic flow in the horizontal direction and the pitch angle was regulated by optic flow in the vertical direction, while the ground speed was measured and maintained by wind-speed sensors. In this case, the rotational optic flow was minimized by flying along straight lines interrupted by short turns or was estimated with on-board gyroscopes and subtracted from the total optic flow, as suggested by biological models
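A toy sketch of the ventral optic flow idea (my own illustration, not a system from the review): for level flight at forward speed v and height h, ventral flow is roughly ω = v/h, so a controller that simply holds ω at a setpoint will settle at h = v/ω without ever measuring height directly.

```python
# Toy optic-flow height regulator (illustrative only; the drones in the
# review are far more sophisticated). Ventral optic flow in level flight
# is roughly omega = v / h, so holding omega constant drives the height
# toward v / omega_target.
def regulate_height(h0, v, omega_target, gain=0.5, dt=0.05, steps=400):
    h = h0
    for _ in range(steps):
        omega = v / h                 # observed ventral optic flow (rad/s)
        error = omega_target - omega  # positive error -> flow too low -> too high
        h -= gain * error * h * dt    # descend when flow is too low
    return h

# Starting at 20 m with v = 5 m/s and a 1 rad/s setpoint, the height
# settles near v / omega_target = 5 m.
print(regulate_height(20.0, 5.0, 1.0))
```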

John Nash, 1928 – 2015

Sad news that John Nash was killed yesterday when his taxi crashed on its way back from the airport. He and his wife were ejected from the taxi when it ran into the lane divider.

Nash is most famous for the biopic A Beautiful Mind, though obviously it is his intellectual contributions that you should know about.

His 30-page PhD thesis was what won him the Nobel Prize. His work on game theory was influential not just in economics, but in psychology and ecology, among other fields.

Recently declassified letters to the NSA show how Nash was foundational to modern cryptography and its reliance on computational complexity. This is the description he included in his letter:

Nash’s transmitting arrangement

When he was killed, he was returning from Norway, where he had received the Abel Prize for his work on nonlinear partial differential equations.

He continued to publish; his final paper (afaik) was “The agencies method for coalition formation in experimental games”.

He also maintained (?) a delightfully minimalist personal web page.

When did we start using information theory in neuroscience?

This question came up in journal club a little while ago.

The hypothesis that neurons in the brain are attempting to maximize their information about the world is a powerful one. Although usually attributed to Horace Barlow, the idea arose almost immediately after Shannon formalized his theory of information.

Remember, Shannon introduced information theory in 1948. Yet only four years later, MacKay and McCulloch (of the McCulloch-Pitts neuron!) published an article analyzing neural coding from the perspective of information theory. By assuming that a neuron is a communication channel, they wanted to understand what the best ‘code’ for a neuron to use would be – a question which was already controversial in the field (it seems as if the dead will never die…). Specifically, they wanted to compare whether the occurrence of a spike was the informative signal or whether it was the time since the previous spike. They found, based on information theory, that it is the interval from the previous spike that can signal the most information.
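A back-of-the-envelope version of their comparison (my illustrative numbers, not MacKay and McCulloch's): if spike timing can be resolved to within dt inside a maximum interval T, an interval code carries log2(T/dt) bits per spike, whereas treating each spike as a bare yes/no event carries at most one bit.

```python
import math

# Illustrative parameters (assumptions for this sketch, not values from
# the 1952 paper):
dt = 0.001  # assumed timing resolution: 1 ms
T = 0.050   # assumed maximum interspike interval: 50 ms

# Binary code: each spike is just "present or absent", worth at most 1 bit.
bits_binary = 1.0

# Interval code: a spike signals which of T/dt possible intervals
# preceded it.
bits_interval = math.log2(T / dt)

print(f"{bits_interval:.2f} bits per spike vs {bits_binary} bit")  # ~5.64 vs 1.0
```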

And for those who want to dig into the analog vs. digital coding debate, they have this to say:

nor is it our purpose in the following investigation to reopen the “analogical versus digital” question, which we believe to represent an unphysiological antithesis. The statistical nature of nervous activity must preclude anything approaching a realization in practice of the potential information capacity of either mechanism, and in our view the facts available are inadequate to justify detailed theorization at the present time

Around the same time, Von Neumann – of course it would be Von Neumann! – delivered a series of lectures analyzing coding from the perspective of idealized neurons of the McCulloch-Pitts variety. Given that these were lectures around the time of the publication of the work in the preceding paragraph, I am guessing that he knew of their work – but maybe not!

In 1954, Attneave looked at how visual perception is affected by information and the redundancy in the signal. He provides by far the most readable paper of the bunch. Here is the opening:

In this paper I shall indicate some of the ways in which the concepts and techniques of information theory may clarify our understanding of visual perception. When we begin to consider perception as an information-handling process, it quickly becomes clear that much of the information received by any higher organism is redundant. Sensory events are highly interdependent in both space and time: if we know at a given moment the states of a limited number of receptors (i.e., whether they are firing or not firing), we can make better-than-chance inferences with respect to the prior and subsequent states of these receptors, and also with respect to the present, prior, and subsequent states of other receptors.

He also has this charming figure:

Attneave's cat

What Attneave’s Cat demonstrates is that most of the information in the visual image of the cat – the soft curves, the pink of the ears, the flexing of the claws – is totally irrelevant to the detection of the cat. All you need is a few points with straight lines connecting them, and this redundancy is surely what the nervous system is relying on.
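A small illustration of the redundancy point (my toy example, not Attneave's): in a smooth 1-D "luminance profile", each sample is almost fully predictable from its neighbour, so the raw signal contains far more samples than information.

```python
import random
import statistics

# A smooth 1-D "luminance profile": an AR(1) process as a stand-in for a
# natural image scanline (the 0.95 neighbour correlation is an assumption).
random.seed(0)
signal = [0.0]
for _ in range(4999):
    signal.append(0.95 * signal[-1] + random.gauss(0.0, 1.0))

# Predict each sample from its left neighbour and measure what's left over.
residuals = [signal[i] - 0.95 * signal[i - 1] for i in range(1, len(signal))]
var_raw = statistics.pvariance(signal)
var_resid = statistics.pvariance(residuals)

# Most of the raw variance is redundant given the neighbour (roughly 0.1
# of it is "new"):
print(f"unexplained fraction: {var_resid / var_raw:.2f}")
```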

Finally, in 1955 there was a summer research school thingamajig hosted by Shannon, Minsky, McCarthy and Rochester with this as one of the research goals:

1. Application of information theory concepts to computing machines and brain models. A basic problem in information theory is that of transmitting information reliably over a noisy channel. An analogous problem in computing machines is that of reliable computing using unreliable elements. This problem has been studied by von Neumann for Sheffer stroke elements and by Shannon and Moore for relays; but there are still many open questions. The problem for several elements, the development of concepts similar to channel capacity, the sharper analysis of upper and lower bounds on the required redundancy, etc. are among the important issues. Another question deals with the theory of information networks where information flows in many closed loops (as contrasted with the simple one-way channel usually considered in communication theory). Questions of delay become very important in the closed loop case, and a whole new approach seems necessary. This would probably involve concepts such as partial entropies when a part of the past history of a message ensemble is known.
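"Reliable computing using unreliable elements" can be illustrated with the simplest redundancy scheme (my example; von Neumann's actual construction is more involved): triplicate a component and take a majority vote.

```python
# Triple-modular redundancy: a 2-of-3 majority vote fails only when two
# or more of the three independent copies fail.
def majority_failure(p):
    """Failure probability of a 2-of-3 majority vote, each copy failing
    independently with probability p."""
    return 3 * p**2 * (1 - p) + p**3

p = 0.01
print(majority_failure(p))  # ~3.0e-4, vs 0.01 for a single copy
```

Tripling the hardware buys roughly a 30-fold reliability improvement here; the open questions in the quote concern how much redundancy is fundamentally required.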

Shannon of course tried to have his cake and eat it too by warning of the dangers of misused information theory. If you are interested in more on the topic, Dimitrov, Lazar and Victor have a great review.

So there you go – it is arguably MacKay, McCulloch, Von Neumann, and Attneave who are the progenitors of Information Theory in Neuroscience.

References

Attneave, F. (1954). Some informational aspects of visual perception. Psychological Review, 61(3), 183-193. DOI: 10.1037/h0054663

Dimitrov, A., Lazar, A., & Victor, J. (2011). Information theory in neuroscience. Journal of Computational Neuroscience, 30(1), 1-5. DOI: 10.1007/s10827-011-0314-3

MacKay, D., & McCulloch, W. (1952). The limiting information capacity of a neuronal link. The Bulletin of Mathematical Biophysics, 14(2), 127-135. DOI: 10.1007/BF02477711

von Neumann, J. (1956). Probabilistic logics and the synthesis of reliable organisms from unreliable components. Automata Studies.

Unrelated to all that, 5/15 edition

Is math in economics just sleight of hand?

http://www.docdroid.net/10gny/aer2ep20151066.pdf.html

via Noah Smith/Justin Wolfers

Welcome to the analome

I’ll let you guess what this is an -ome of…

The National Geographic Photo Contest

walks on water bolivia

The Great Pheasant Mating Dance

Birds, people, insects, we are all the same

The importance of penis length (in an insect)

importance of penis length

I’ll just leave this here.

The two scientific cultures: publications or citations?

I would much rather graduate with three papers cited twenty times each than twenty papers cited three times each.*

That fact drives how I think about publishing my results:

If I wanted to publish the maximum number of papers per dataset, I’d be worried about including too much data in any given paper because, once it was published, other researchers might take that data and do the same analyses I was planning to do in a follow-up paper.

If I want my paper to be cited as much as possible, though, the opposite is true. I WANT my data to be as useful and accessible as possible because that will increase the number of other groups who use the data – and cite my work when they publish their next paper.

In neuroscience, the “high prestige” positions go to those with three papers cited twenty times each; I am not sure that is a good thing.

Fishes escape from sharks…sometimes

A cover shows fish escape waves from sharks

fish escape waves


But sometimes they’re busy doing other things (mating) –

no fish escape waves

(via Johann Mourier)

DREADD users blog

Lots of good stuff on this blog! Check it out if you even have a passing interest in DREADDs.

The Evolution of Popular Music: USA 1960-2010

musical revolutions

There have been three music revolutions since 1960: in 1963, 1982, and 1991

Logothetis, animal rights extremists, and support

While I was on an accidental blogging sabbatical, Nikos Logothetis stopped his work on non-human primates because of pressure from animal rights groups:

Logothetis’s research on the neural mechanisms of perception and object recognition has used rhesus macaques with electrode probes implanted in their brains. The work was the subject of a broadcast on German national television in September that showed footage filmed by an undercover animal rights activist working at the institute. The video purported to show animals being mistreated.

Logothetis has said the footage is inaccurate, presenting a rare emergency situation following surgery as typical and showing stress behaviors deliberately prompted by the undercover caregiver. (His written rebuttal is here.) The broadcast triggered protests, however, and it prompted several investigations of animal care practices at the institute. Investigations by the Max Planck Society and animal protection authorities in the state of Baden-Württemberg found no serious violations of animal care rules. A third investigation by local Tübingen authorities that led to a police raid at the institute in late January is still ongoing.

Although this has been covered well elsewhere, I figured it was worth posting because it seems to have disappeared into the ether of conversation. It’s just last week’s news! But the effects are long-lasting. The Center for Integrative Neuroscience, where Logothetis works, has a motion for solidarity which you should take a moment to sign.

His most-cited paper used monkeys to compare local field potentials (neural electrical activity) and fMRI BOLD signals. Here are two relevant figures comparing the two:

logothetis-1 logothetis-2

He has many good papers studying vision. He also tried studying consciousness using vision once upon a time. So there’s that.

Karl Deisseroth’s New Yorker profile

The New Yorker has profiled Karl Deisseroth. I liked this paragraph which is an excellent description of his personality:

The Stanford neuroscientist Rob Malenka, who oversaw Deisseroth’s postdoctoral work, told me that in some ways he underestimated his trainee. “I knew he was really smart. I didn’t appreciate that underneath that laid-back, almost surfer-dude kind of persona is this intense creative and intellectual drive, this intense passion for discovery. He almost hides it by his presentation.”

I did not know this; let’s hope it is better than Ramon y Cajal’s science fiction:

His initial dream, in fact, was to write. He took writing courses as an undergraduate, and when he was a graduate student in both medicine and neuroscience at Stanford he took a fiction-writing class that met two nights a week at a junior college nearby. He remains an avid reader of fiction and poetry, and he is polishing a book of short stories and essays loosely inspired by Primo Levi’s “The Periodic Table.”

We are bombarded with the ‘genius’ and ‘superhuman that needs no sleep’ myths so much that it is worthwhile to see the New Yorker nix that one:

The doubts only motivated Deisseroth. “I felt a sort of personal need to see what was possible,” he says. Malenka told me that this understates the case considerably: “There’s this drive of, like, ‘You think I’m wrong about this, motherfucker? I’m going to show you I was right.’ ” Deisseroth began working furiously. “He was getting up at 4 or 5 A.M. and going to bed at one or two,” Monje says. He kept up this schedule for five years, until optogenetic experiments began working smoothly. “There are people who don’t need as much sleep,” Monje says. “Karl is not one of those people. He’s just that driven.”

But of course this is the best paragraph. I am guessing Deisseroth’s wife still doesn’t quite understand what she’s dealing with (because it’s so strange):

Deisseroth estimates that optogenetics is now being used in more than a thousand laboratories worldwide, and he takes twenty minutes every Monday morning to sift through written requests for the opsins. It was not until Monje joined her husband at a recent neuroscience conference in Washington, D.C., that she understood the fame that optogenetics had brought him. “People were stopping us at the airport asking to take a picture with him, asking for autographs,” she said. “He can’t walk through the conference hall—there’s a mob. It’s like Beatlemania. I realized, I’m married to a Beatle. The nerdy Beatle.”

I hosted Karl Deisseroth when he visited UCSD last year. He struck me as very humble yet ambitious. Many ‘famous’ researchers come across as a bit airy when they speak of future research, but Deisseroth was very serious about the strengths and weaknesses of everything he did. The most interesting thing he said was in response to a question about his papers getting a zillion citations. He claimed that it made them work more slowly and carefully; that they published less than they could have because, instead of needing to be 95% certain that what they did was correct, they needed to be 99.9% certain. Everything they publish will be put under a microscope (so to speak).

The future ecology of stock traders

I am beyond fascinated by the interactions between competing intelligences that exist in the stock market. It is a bizarre mishmash of humans, AIs, and both (cyborgpeople?).

One recent strategy that exploits this interaction is ‘spoofing’. The description from the link:

  • You place an order to sell a million widgets at $104.
  • You immediately place an order to buy 10 widgets at $101.
  • Everyone sees the million-widget order and is like, “Wow, lotta supply, the market is going down, better dump my widgets!”
  • So someone is happy to sell you 10 widgets for $101 each.
  • Then you immediately cancel your million-widget order, leaving you with 10 widgets for which you paid $1,010.
  • Then you place an order to buy a million widgets for $101, and another order to sell 10 widgets at $104.
  • Everyone sees the new million-widget order, and since no one has any attention span at all, they are like, “Wow, lotta demand, the market is going up, better buy some widgets!”
  • So someone is happy to buy 10 widgets from you for $104 each.
  • Then you immediately cancel your million-widget order, leaving you with no widgets, no orders and $30 in sweet sweet profits.
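The arithmetic of the steps above, spelled out:

```python
# P&L of the spoofing example above. The million-widget orders are placed
# and cancelled without trading; only the two 10-widget orders fill.
qty = 10
buy_price = 101    # bought 10 widgets at $101 each
sell_price = 104   # sold 10 widgets at $104 each

cost = qty * buy_price        # $1,010
proceeds = qty * sell_price   # $1,040
profit = proceeds - cost
print(profit)  # 30 -> the "$30 in sweet sweet profits"
```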

Amusingly enough, you don’t even need a fancy computer program for it – you can just hire a bunch of people who are really good at fast video games and they can click click click those keys fast enough for you.

Now some day trader living in his parents’ basement is accused of using this technique and causing the flash crash of 2010 (it possibly wasn’t him directly, but he could have caused some cascade that led to it).

I’m sitting here with popcorn, waiting to see how the ecosystem of varied intelligences evolves in competition with each other. Sounds like Wall Street needs to take some crash courses in ecology.