Quotes for the New Year

R.A. Fisher is the guest of honor to bring in the New Year:

The tendency of modern scientific teaching is to neglect the great books, to lay far too much stress upon relatively unimportant modern work, and to present masses of detail of doubtful truth and questionable weight in such a way as to obscure principles. ::: R.A. Fisher

and

The History of Science has suffered greatly from the use by teachers of second-hand material, and the consequent obliteration of the circumstances and the intellectual atmosphere in which the great discoveries of the past were made. A first-hand study is always instructive, and often … full of surprises. ::: R.A. Fisher

Instead of looking forward to the new year, I am going to make sure to spend some time understanding ideas from old years. I do not think I did enough of that in graduate school.

These are from Vince Buffalo’s page of quotes.

How do you do your science?

I spend too much time thinking about the best way to do science. How should I structure my experiments if I want to maximize the likelihood that what I discover is both true and useful to other people? And how many different experiments do I need to run? Especially as a theoroexperimentalist.

The background philosophy of science chatter has picked up a bit over the last week, and I was spurred by something said on Noahpinion:

I don’t see why we should insist that any theory be testable. After all, most of the things people are doing in math departments aren’t testable, and no one complains about those, do they? I don’t see why it should matter if people are doing math in a math department, a physics department, or an econ department.

Also (via Vince Buffalo)

As a mathematical discipline travels far from its empirical source, or still more, if it is a second and third generation only indirectly inspired from ideas coming from ‘reality’, it is beset with very grave dangers. It becomes more and more purely aestheticizing, more and more purely l’art pour l’art. This need not be bad, if the field is surrounded by correlated subjects, which still have closer empirical connections, or if the discipline is under the influence of men with an exceptionally well-developed taste. But there is a grave danger that the subject will develop along the line of least resistance, that the stream, so far from its source, will separate into a multitude of insignificant branches, and that the discipline will become a disorganized mass of details and complexities. In other words, at a great distance from its empirical source, or after much ‘abstract’ inbreeding, a mathematical subject is in danger of degeneration. At the inception the style is usually classical; when it shows signs of becoming baroque the danger signal is up. It would be easy to give examples, to trace specific evolutions into the baroque and the very high baroque, but this would be too technical. In any event, whenever this stage is reached, the only remedy seems to me to be the rejuvenating return to the source: the reinjection of more or less directly empirical ideas. I am convinced that this is a necessary condition to conserve the freshness and the vitality of the subject, and that this will remain so in the future. ::: John von Neumann

Right before I left the Salk Institute, I was chatting with an older scientist and he said something along the following lines (paraphrasing):

The best science is done when you can identify a clear mechanism that you can test; [anonymized scientist] was known for having a lot of confirmatory evidence that pointed at some result, but nothing conclusive, no mechanism. Pretty much all of it ended up being wrong.

Basically, he was of the opinion that you can either provide evidence for a direct mechanism, or provide evidence for a general idea that is consistent with and points toward that mechanism, like so:

[Figure: Experimental design. Where should you collect your evidence? Each situation makes evidence for one of these arrows easier to collect than for the others.]

But if you want to maximize your likelihood of making a lasting impact on knowledge, where do you want to place your bets? Can theories come before mechanism?

I don’t know.


No one will remember you because society doesn’t care

A few years ago I was in Washington, DC and, being a bit of a tourist, I randomly picked up a fact card about one of our exciting presidents. Obviously the excitement mounted: who did I get? My best buddy LBJ? The notoriously rotund Taft? The Ur-President Washington? Nope, I got mighty Chester A. Arthur! Wait, who?

I come from a family where History is important. Some of the first fatherly advice I got was that I should set my PIN to something like the year of the Battle of Hastings because obviously that is easy to remember. He also likes to declaim that every educated person must surely know the year of the Norman Invasion. I can recite the Presidents back to Cleveland (though I sometimes forget Harding). I’m pretty sure my father can recite every president from Washington to the present day, in order.

And I had not the slightest idea that this Arthur guy ever even existed. I thought this card must be a joke until I pulled up Wikipedia and there he was (your trivia for the day: he became President after Garfield was assassinated).

[Figure: Who remembers the presidents]

Clearly I could not remember the guy. But why him rather than anyone else? Now that is socially determined. Roediger and DeSoto examined survey data from 1974, 1991, and 2009 that asked people to name who was president in which year. What is interesting is that each generation shows a very similar ‘forgetting’ curve: people generally remember what is recent, and recall drops off steeply after that.

But look at that tail! Look at how the baby boomers remember presidents some time back and then it just collapses. And Generation X is kind of similar – with a few more remembering Hoover, Roosevelt, and Truman. And then Millennials have a fairly persistent memory up through Carter that the Boomers never would have had!

If you want more evidence that the Boomers have taken over pop culture and instilled their values as the important values in a way that previous generations didn’t – there it is. We remember their Presidents, not ours.

This is also pretty clear when participants are asked to freely recall Presidents. Which names do people know? Obviously, Washington, Jefferson, Lincoln, and Roosevelt are big ones. There are also bumps for John Quincy Adams and, surprisingly, Polk (!). But there is a persistent memory across generations for the Boomer presidents in a somewhat surprising way*.

And as history goes, so go some names. Today, no one remembers Fillmore or Pierce, Arthur or Harding (whew). And we can fit quantitative forgetting curves to guess how long Presidents will be remembered. Kennedy will stick around, but my man LBJ is soon to be unjustly forgotten. Such is life.

[Figure: How long will the presidents be remembered]
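(Purely to illustrate the kind of extrapolation involved, here is a minimal sketch in Python. It is not the authors’ actual analysis: the exponential-decay form, the 25% threshold, and every number in it are placeholder assumptions, not data from the paper.)

```python
# Minimal sketch of fitting a forgetting curve (assumed exponential decay).
# The numbers below are hypothetical placeholders, NOT Roediger & DeSoto (2014) data.
import numpy as np
from scipy.optimize import curve_fit

years_since_office = np.array([5, 15, 25, 35, 45, 55, 65])              # hypothetical
prop_recalling = np.array([0.95, 0.85, 0.60, 0.40, 0.25, 0.15, 0.10])   # hypothetical

def forgetting_curve(t, a, b, c):
    """Recall proportion decays exponentially toward a long-run floor c."""
    return a * np.exp(-b * t) + c

(a, b, c), _ = curve_fit(forgetting_curve, years_since_office, prop_recalling,
                         p0=(1.0, 0.05, 0.05))
print(f"decay rate: {b:.3f} per year, long-run floor: {c:.2f}")

# Extrapolate: roughly when does predicted recall drop below 25%?
t = np.linspace(0, 200, 401)
below = t[forgetting_curve(t, a, b, c) < 0.25]
print(f"falls below 25% roughly {below[0]:.0f} years after leaving office"
      if below.size else "never falls below 25% under this fit")
```

The paper’s own modeling is more careful than this, but the idea of extrapolating a fitted curve forward to guess when a president slips out of collective memory is the same.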

* Sorry I can’t make this quantitative; the “data” section of their supplemental methods appears to be missing…

References

Roediger, H., & DeSoto, K. (2014). Forgetting the presidents. Science, 346(6213), 1106–1109. DOI: 10.1126/science.1259627

A search for the science of the mind

More history of the scientists who wanted to understand the mind. Turns out, there was a lot of racism in early 20th century science – what a surprise.

He modelled the brain’s structure as though it was an archaeological site, the different levels supposedly reflecting evolutionary advances. The neocortex, shared by all mammals, controlled basic functions while the prefrontal area was the seat of more advanced abilities.

Investigating the perception of pain, Head had two cutaneous nerves on his left forearm severed. Every Friday for the next four years, he visited Rivers in his college rooms to chart the process of regeneration and the areas of acute sensitivity. Echoing Elliot Smith’s ideas about the evolutionary levels of the brain, Rivers and Head decided that the nervous system contained two layers: one older and more primitive; the other more subtle and localized. They speculated that the two systems “owed their origin to the developmental history of the nervous system. They reveal the means by which an imperfect organism has struggled towards improved functions and physical unity”. And this “could be seen as a metaphor for the triumph of civilization over savagery in human history”. Frederic Bartlett, a student of Rivers who went on to become a leading psychologist in the next generation, noted that this metaphor informed all Rivers’s later theories in physiology, psychology and anthropology. The structure of every human organ, every social institution, revealed cumulative layers of progressive development.

Psychology was looked down on by the Cambridge establishment, but Ludwig Wittgenstein was intrigued and regularly came to Mill Lane to work with Myers. “I had a discussion with Myers about the relations between Logic and Philosophy”, he wrote to Bertrand Russell. “I was very candid and I am sure he thinks that I am the most arrogant devil who ever lived . . . . I think he was a bit less confused after the discussion than before.” When the laboratory was opened to the public in 1913, Wittgenstein exhibited an apparatus for investigating the perception of rhythm. Perhaps influenced by Wittgenstein, Myers was moving away from biological determinism. The physiologists, he complained, “in their attempts to penetrate the reality of the known, were deliberately ignoring the knower”

And some more on the history of the word ‘scientist’ (see previously):

Carrington had noticed the spread of a particular term related to scientific research [it was “scientist”]. He himself felt the word was “not satisfactory,” and he wrote to eight prominent writers and men of science to ask if they considered it legitimate. Seven responded. Huxley and Argyll joined a five-to-two majority when they denounced the term. “I regard it with great dislike,” proclaimed Argyll. Huxley, exhibiting his usual gift for witty dismissals, said that the word in question “must be about as pleasing a word as ‘Electrocution.’”

…The English academic William Whewell first put the word “scientist” into print in 1834 in a review of Mary Somerville’s On the Connexion of the Physical Sciences. Whewell’s review argued that science was becoming fragmented, that chemists and mathematicians and physicists had less and less to do with one another. “A curious illustration of this result,” he wrote, “may be observed in the want of any name by which we can designate the students of the knowledge of the material world collectively.” He then proposed “scientist,” an analogue to “artist,” as the term that could provide linguistic unity to those studying the various branches of the sciences.

…“Scientist” met with a friendlier reception across the Atlantic. By the 1870s, “scientist” had replaced “man of science” in the United States. Interestingly, the term was embraced partly in order to distinguish the American “scientist,” a figure devoted to “pure” research, from the “professional,” who used scientific knowledge to pursue commercial gains…For most British readers, however, the popularity of the word in America was, if anything, evidence that the term was illegitimate and barbarous.

Notes on the word ‘scientist’


The term ‘scientist’ was invented only in 1833, by the polymath William Whewell, who gave it a faintly pejorative odour, drawing analogies to ‘journalist’, ‘sciolist’, ‘atheist’, and ‘tobacconist’. ‘Better die … than bestialise our tongue by such barbarisms,’ scowled the geologist Adam Sedgwick. ‘To anyone who respects the English language,’ said T H Huxley, ‘I think “Scientist” must be about as pleasing a word as “Electrocution”.’ These men preferred to call themselves ‘natural philosophers’ and there was a real distinction. Scientists were narrowly focused utilitarian data-grubbers; natural philosophers thought deeply and wrote elegantly about the moral, cosmological and metaphysical implications of their work…

Charles Babbage, in designing his ‘difference engine’, anticipated all the basic principles of the modern computer – including ‘garbage in, garbage out’. In Reflections on the Decline of Science in England (1830) he accused his fellow scientists of routinely suppressing, concocting or cooking data. Such corruption (he confidently insisted) could be cleaned up if the government generously subsidised scientific research…

After his sketches of these forgotten bestsellers, Secord concludes with the literary bomb that blew them all up. In Sartor Resartus Thomas Carlyle fiercely deconstructed everything the popular scientists stood for. Where they were cool, rational, optimistic and supremely organised, he was frenzied, mystical, apocalyptic and deliberately nonsensical. They assumed that big data represented reality; he saw that it might be all pretence, fabrication, image – in a word, ‘clothes’. A century and a half before Microsoft’s emergence, Carlyle grasped the horror of universal digitisation: ‘Shall your Science proceed in the small chink-lighted, or even oil-lighted, underground workshop of Logic alone; and man’s mind become an Arithmetical Mill?’ That was a dig at the clockwork utilitarianism of both John Stuart Mill and Babbage: the latter called his central processing unit a ‘mill’.

Diffusers of useful knowledge. A review by Jonathan Rose of an excellent-sounding book, Visions of Science: Books and Readers at the Dawn of the Victorian Age.

Worms, nervous systems, and the beginning of neuroscience

Worms can distinguish between light and dark, and they generally stay underground, safe from predators, during daylight hours. They have no ears, but if they are deaf to aerial vibration, they are exceedingly sensitive to vibrations conducted through the earth, as might be generated by the footsteps of approaching animals. All of these sensations, Darwin noted, are transmitted to collections of nerve cells (he called them “the cerebral ganglia”) in the worm’s head.

“When a worm is suddenly illuminated,” Darwin wrote, it “dashes like a rabbit into its burrow.” He noted that he was “at first led to look at the action as a reflex one,” but then observed that this behavior could be modified—for instance, when a worm was otherwise engaged, it showed no withdrawal with sudden exposure to light.

For Darwin, the ability to modulate responses indicated “the presence of a mind of some kind.” He also wrote of the “mental qualities” of worms in relation to their plugging up their burrows, noting that “if worms are able to judge…having drawn an object close to the mouths of their burrows, how best to drag it in, they must acquire some notion of its general shape.” This moved him to argue that worms “deserve to be called intelligent, for they then act in nearly the same manner as a man under similar circumstances.”

Darwin was discussing the cerebral ganglia of worms in 1881. If you are particularly interested in worms or just plain masochistic, you can find a copy of the book here. It is somehow historically poetic that, by twists and turns, worms have become one of the foundational species of neuroscience research.

Yet it made me realize that I had no idea when the term ‘cerebral ganglia’ came into use. When did we realize that we had a ‘nervous system’? I will go into this more in a later post, but the term began appearing in books around the year 1650 (which is consistent with other sources I have found). On the other hand, we didn’t recognize the neuron as a separate, fundamental unit until almost 1900!

[Figure: neuroscience ngram]