Deep learning and vision

Object recognition is hard. Famously, an attempt to use computers to automatically identify tanks in photos in the 1980s failed in a clever way:

But the scientists were worried: had it actually found a way to recognize if there was a tank in the photo, or had it merely memorized which photos had tanks and which did not? This is a big problem with neural networks: after they have been trained, you have no idea how they arrive at their answers; they just do. The question was, did it understand the concept of tanks vs. no tanks, or had it merely memorized the answers? So the scientists took out the photos they had been keeping in the vault and fed them through the computer. The computer had never seen these photos before — this would be the big test. To their immense relief the neural net correctly identified each photo as either having a tank or not having one…

Eventually someone noticed that in the original set of 200 photos, all the images with tanks had been taken on a cloudy day while all the images without tanks had been taken on a sunny day. The neural network had been asked to separate the two groups of photos and it had chosen the most obvious way to do it – not by looking for a camouflaged tank hiding behind a tree, but merely by looking at the colour of the sky. The military was now the proud owner of a multi-million dollar mainframe computer that could tell you if it was sunny or not.

But Deep Learning – and huge data sets – have driven a major breakthrough over the last few years:

Today, Olga Russakovsky at Stanford University in California and a few pals review the history of this competition and say that in retrospect, SuperVision’s comprehensive victory was a turning point for machine vision. Since then, they say, machine vision has improved at such a rapid pace that today it rivals human accuracy for the first time. [NE: I don't think this is quite true...]

Convolutional neural networks consist of several layers of small neuron collections that each look at small portions of an image. The results from all the collections in a layer are made to overlap to create a representation of the entire image. The layer below then repeats this process on the new image representation, allowing the system to learn about the makeup of the image.
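That layered process can be sketched in a few lines of code. This is a toy illustration in plain numpy (one hand-made edge-detecting filter, applied twice with a rectification step in between), nothing like the actual SuperVision architecture, which learns many filters per layer:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small kernel over the image; each output value
    summarizes one small patch -- the 'small portion' of the image
    that each neuron collection looks at."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A toy 8x8 "image" with a bright vertical edge down the middle.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# A hand-made edge detector (a real CNN learns its kernels from data).
kernel = np.array([[-1., 0., 1.],
                   [-1., 0., 1.],
                   [-1., 0., 1.]])

layer1 = np.maximum(conv2d(image, kernel), 0)   # convolution + rectification
layer2 = np.maximum(conv2d(layer1, kernel), 0)  # next layer re-examines layer1

print(layer1.shape, layer2.shape)  # (6, 6) (4, 4)
```

Each layer's output is smaller than its input because every value summarizes a 3×3 patch of the layer below; stacking layers is what lets the system "learn about the makeup of the image" at larger and larger scales.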

An interesting question is how the top algorithms compare with humans when it comes to object recognition. Russakovsky and co have compared humans against machines and their conclusion seems inevitable. “Our results indicate that a trained human annotator is capable of outperforming the best model (GoogLeNet) by approximately 1.7%,” they say… But the trend is clear. “It is clear that humans will soon outperform state-of-the-art image classification models only by use of significant effort, expertise, and time,” say Russakovsky and co.

What is intelligence?

You may have heard that a recent GWAS study found three genes for heritable intelligence, though with tiny effects. There was a great quote in a Nature News article on the topic:

“We haven’t found nothing,” he says.

Yeah, you don’t want that to be your money quote.

Kevin Mitchell has been tweeting about the study – I hope he storifies it! – and linked to an old post of his suggesting that the genetics of intelligence are really the genetics of stupidity: it’s not that these genes are making you smarter, but that they’re making you less dumb (as I gather, a lot of evidence suggests that ‘intelligence’ is related to overall health.)

Anyway, the SNPs that the GWAS identified are in KCNMA1, NRXN1, POU2F3, and SCRT, which are all involved in glutamate neurotransmission. This is always troubling to my tiny brain, because I never understand quite how ‘intelligence’ works. People like to think that it is some kind of learning, so if we can just learn better we’ll be smarter. And that’s what the authors of the article hint at.

But how does that even make sense? Learning faster is, in a way, like being hyperreactive to the world. There’s a reason that overlearning – what machine learning calls overfitting – is a problem! There is an optimal level of learning that, presumably, evolution has stuck us with. So is the supposition that being overreactive to stimuli in the world is something that is good? Or is it that the modern world favors it whereas historically it would not have? Or what?
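The machine-learning version of that point is easy to see in a classic curve-fitting sketch (made-up data, plain numpy): a ninth-degree polynomial can memorize ten noisy training points almost perfectly, yet it does worse on fresh samples than it does on the points it memorized.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: a simple underlying rule (a straight line) plus noise.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.2, size=10)
x_test = np.linspace(0.05, 0.95, 50)
y_test = 2 * x_test + rng.normal(0, 0.2, size=50)

results = {}
for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit polynomial of this degree
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    results[degree] = (train_err, test_err)
    print(f"degree {degree}: train error {train_err:.4f}, test error {test_err:.4f}")
```

That gap between training error and held-out error is the sense in which there is an optimal level of learning: past it, you are fitting the noise, not the world.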

Yet another programming note

I’m in the midst of some intense experiments and don’t really have time for writing or thinking for the next week or two, which is why things have slowed down…

I’ll try to post snippets of articles I find interesting – not that I have much time for reading – or very brief thoughts on relevant articles that I scan. Expect the blog to be even more glib than usual! I’ll try to keep it semi-active, though.

Unrelated to all that, 9/5 edition


FEINSTEIN: Hey, there’s something else you should know, Johnson. He’s not just with the graduate program. He’s an MD/PhD student.

JOHNSON: [His mouth hangs open in rage.] He’s with the medical school? There’s no way I’m working with him now! We’ve been fighting with them over jurisdiction on this research project for months now. Remember that axon-pathfinding paper we almost had published before it was stolen by those patient-coddlers? We don’t need a future stethoscope-pusher hanging around our lab.

The phone rings, and FEINSTEIN answers it.

FEINSTEIN: Yeah? Really? Glutamatergic and GABAergic neurotransmitter vesicles? All right. I’ve got the perfect guys for the job. [He hangs up phone.]

FEINSTEIN: My inside man over at Nature Neuroscience tells me that those pseudo-scientists over at that Hopkins lab are trying to scoop us on your paper, Johnson. My pal is going to work over their submitted manuscript with some nasty reviews, but I’m going to need you two to get down to your bench and get me some electron microscopy images so we take ’em out.

How to Get Into an Ivy League College—Guaranteed

Ma’s algorithm, for example, predicts that a U.S.-born high school senior with a 3.8 GPA, an SAT score of 2,000 (out of 2,400), moderate leadership credentials, and 800 hours of extracurricular activities, has a 20.4 percent chance of admission to New York University and a 28.1 percent shot at the University of Southern California. Those odds determine the fee ThinkTank charges that student for its guaranteed consulting package: $25,931 to apply to NYU and $18,826 for USC. “Of course we set limits on who we’ll guarantee,” says Ma. “We don’t want to make this a casino game.”


Greg Dunn’s neural landscape


Every so often these fantastic neural paintings by Greg Dunn get passed around. I never wondered about the backstory until now:

My artistic career began during my tenure as a graduate student in neuroscience at the University of Pennsylvania. As I came to learn, molecular research can be an existential exercise in that you must rely on machines and chemical reagents to “see” your experiments. Painting provided me a welcome respite from lab frustrations because it gave me a sense of control. When painting, I can experiment and immediately see the result, judge it against my intentions, and iterate as necessary. I can convey my thoughts to the world without having to worry about grants, contaminated compounds, the politics of publishing, or an unexpected flood in the mouse room threatening to wash away my study subjects.

…My graduate school days were filled with stunning microscopic imagery. Neurons, in particular, resonated with me. With their chaotic, unpredictable branching patterns, neurons have much in common aesthetically with traditional subjects of Chinese, Japanese, and Korean ink painting, such as trees and branches. Viewed as landscapes, neuronal vistas would fit easily within an Asian context. I began to experiment with merging the two.

From American Scientist

An “interview” with Laboratory Life

Note: Reading Latour’s “Laboratory Life”, I found that there were too many great quotes to summarize. Having once been chastised by a high school teacher that “when there is a great writer, you should let them have their voice”, I thought that a Q&A with the book would be a suitable way to get the ideas across. All answers are quotes from the book in one way or another. This is not meant to be an introduction to Laboratory Life but rather a series of quotes that I found particularly interesting.

Q: You are a book about how facts – or should I say, “facts” – are constructed in science. In order to understand how that happens, you spent some time in Roger Guillemin’s neuroendocrinology laboratory at the Salk Institute. How would you describe – anthropologically – what you saw there?

Firstly, at the end of each day, technicians bring piles of documents from the bench space through to the office space. In a factory we might expect these to be reports of what has been processed and manufactured. For members of this laboratory, however, these documents constitute what is yet to be processed and manufactured. Secondly, secretaries post off papers from the laboratory at an average rate of one every ten days. However, far from being reports of what has been produced in the factory, members take these papers to be the product of their unusual factory.

By dividing the annual budget of the laboratory by the number of articles published (and at the same time discounting those articles in the laymen’s genre), our observer calculated that the cost of producing a paper was $60,000 in 1975 and $30,000 in 1976 (ed: approximately $260,000 and $123,000, respectively, in 2013 terms). Clearly, papers were an expensive commodity!

Moreover, nearly all the peptides (90 percent) are manufactured for internal consumption and are not available as output. The actual output (for example, 3.2 grams in 1976) is potentially worth $130,000 at market value, and although it cost only $30,000 to produce, samples are sent free of charge to outside researchers who have been able to convince one of the members of the laboratory that his or her research is of interest.

Q: Then it seems like if one were an outsider observing a laboratory, it was the papers that were important, not the experiments. How did the scientists react when they heard this?

Indeed, our observer incurred the considerable anger of members of the laboratory, who resented their representation as participants in some literary activity. In the first place, this failed to distinguish them from any other writers. Secondly, they felt that the important point was that they were writing about something, and that this something was “neuroendocrinology.” They claimed merely to be scientists discovering facts; [I] doggedly argued that they were writers and readers in the business of being convinced and convincing others.

Q: If the work of science is fundamentally literary, it must have a set of precursors – myths, legends, etc – that it draws on. Could you illustrate that somehow?

Neuroendocrinology seemed to have all the attributes of a mythology: it had had its precursors, its mythical founders, and its revolutions. In its simplest version, the mythology goes as follows: After World War II it was realised that nerve cells could also secrete hormones and that there is no nerve connection between brain and pituitary to bridge the gap between the central nervous system and the hormonal system. A competing perspective, designated the “hormonal feedback model” was roundly defeated after a long struggle by participants who are now regarded as veterans. As in many mythological versions of the scientific past, the struggle is now formulated in terms of a fight between abstract entities such as models and ideas. Consequently, present research appears based on one particular conceptual event, the explanation of which only merits scant elaboration by scientists. The following is a typical account: “In the 1950s there was a sudden crystallization of ideas, whereby a number of scattered and apparently unconnected results suddenly made sense and were intensely gathered and reviewed.”

However, the mythology of its development is very rarely mentioned in the course of the day-to-day activities of members of the laboratory. The beliefs that are central to the mythology are noncontroversial and taken for granted, and only enjoy discussion during the brief guided tours of the laboratory provided for visiting laymen. In the setting, it is difficult to determine whether the mythology is never alluded to simply because it is a remote and unimportant remnant of the past or because it is now a well-known and generally accepted item of folklore.

Q: Okay, but most scientists would say that they spend their time performing experiments in order to establish facts. 

Let us start with the concept of noise. Information is a relation of probability; the more a statement differs from what is expected, the more information it contains. It follows that a central question for any participant advocating a statement in the field is how many alternative statements are equally probable. If a large number can easily be thought of, the original statement will be taken as meaningless and hardly distinguishable from others. If the others seem much less likely than the original statement, the latter will stand out and be taken as a meaningful contribution.
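(ed: the “relation of probability” here is Shannon’s self-information, or surprisal: a statement with probability p carries −log2(p) bits, so the less expected a statement, the more information it contains. A toy calculation:)

```python
import math

def surprisal_bits(p):
    """Shannon self-information: improbable statements carry more bits."""
    return -math.log2(p)

# An expected statement (probability 0.5) vs. a surprising one (0.01).
print(surprisal_bits(0.5))   # 1.0 bit
print(surprisal_bits(0.01))  # ~6.6 bits
```

A coin flip’s outcome carries one bit; a one-in-a-hundred claim carries nearly seven, which is why only statements that stand out from the equally probable alternatives count as meaningful contributions.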

The whole series of transformations, between the rats from which samples are initially extracted and the curve which finally appears in publication, involves an enormous quantity of sophisticated apparatus. By contrast with the expense and bulk of this apparatus, the end product is no more than a curve, a diagram, or a table of figures written on a frail sheet of paper. It is this document, however, which is scrutinised by participants for its “significance” and which is used as “evidence” in part of an argument or in an article. Thus, the main upshot of the prolonged series of transformations is a document which, as will become clear, is a crucial resource.

Instead of a “nice curve,” it is all too easy to obtain a chaotic scattering of random points or curves which cannot be replicated. Every curve is surrounded by a flow of disorder, and is only saved from dissolution because everything is written or routinised in such a way that a point cannot as well be in any place of the log paper. The investigator placed a premium on those effects which were recordable; the data were cleaned up so as to produce peaks which were clearly discernible from the background; and, finally, the resulting figures were used as sources of persuasion in an argument.

It was obvious to our observer, however, that everything taken as self-evident in the laboratory was likely to have been the subject of some dispute in earlier papers.

Q: Could you describe the types of facts?

Statements corresponding to a taken-for-granted fact were denoted type 5 statements. Precisely because they were taken for granted, our observer found that such statements rarely featured in discussions between laboratory members. The greater the ignorance of a newcomer, the deeper the informant was required to delve into layers of implicit knowledge, and the farther into the past. Beyond a certain point, persistent questioning by the newcomer about “things that everybody knew” was regarded as socially inept.

More commonly, type 4 statements formed part of the accepted knowledge disseminated through teaching texts. They are, by contrast with type 5 statements, made explicit. This type of statement is often taken as the prototype of scientific assertion.

Many type 3 statements were found in review discussions and are of the form, “A has a certain relationship with B.” By deleting modalities from type 3 statements it is possible to obtain type 4 statements. For instance, “Oxytocin is generally assumed to be produced by the neurosecretory cells of the paraventricular nuclei.”

Type 2 statements could be identified as containing modalities which draw attention to the generality of available evidence (or the lack of it). For example: “There is a large body of evidence to support the concept of a control of the pituitary by the brain.”

Type 1 statements comprise conjectures or speculations (about a relationship) which appear most commonly at the end of papers.

It would follow that changes in statement type would correspond to changes in fact-like status. For example, the deletion of modalities in a type 3 statement would leave a type 4 statement, whose facticity would be correspondingly enhanced.

Q: Okay, let’s take this metaphor seriously. If the purpose of a laboratory is to construct papers for the purpose of persuasion – why does a scientist do this?

It is true that a good deal of laboratory conversation included mention of the term credit. The observer’s notebooks reveal the almost daily reference to the distribution of credit. It was a commodity which could be exchanged. The beginning of a scientist’s career entails a series of decisions by which individuals gradually accumulate a stock of credentials. These credentials correspond to the evaluation by others of possible future investments in that scientist. The investments have an enormous payoff both because of a concentration of credit in the institute and because of a high demand for credible information in the field. In terms of his pursuit of reward, his career makes little sense; as an investor of credibility it has been very successful.

For example, a successful investment might mean that people phone him, his abstracts are accepted, others show interest in his work, he is believed more easily and listened to with greater attention, he is offered better positions, his assays work well, data flow more reliably and form a more credible picture. The objective of market activity is to extend and speed up the credibility cycle as a whole. Those unfamiliar with daily scientific activity will find this portrayal of scientific activity strange unless they realise that only rarely is information itself “bought.” Rather, the object of “purchase” is the scientist’s ability to produce some sort of information in the future. The relationship between scientists is more like that between small corporations than that between a grocer and his customer.

Another key feature of the hierarchy is the extent to which people are regarded as replaceable. When, for example, a participant talks about leaving the group, he often expresses concern about the fate of the antisera, fractions, and samples for which he has been responsible. It is these, together with the papers he has produced, that represent the riches needed by a participant to enable him to settle elsewhere and write further papers. Since the value of information is thought to depend on its originality, the higher a participant in the hierarchy the less replaceable he is thought to be.

Q: Credit is important because it means that the science is deemed more credible. Talk about how this affects the perception of the science.

For instance, the standing of one scientist might be such that when he defines a problem as important, no one feels able to object that it is a trivial question; consequently, the field may be moulded around this important question, and funds will be readily forthcoming. One review specified fourteen criteria which had to be met before the existence of a new releasing factor could be accepted. These criteria were so stringent that only a few signals could be distinguished from the background noise. This, in turn, meant that most previous literature had to be dismissed. By increasing the material and intellectual requirements, the number of competitors was reduced. 

Whether or not the number and quality of inscriptions constituted a proof depended on negotiations between members. Let’s say that Wilson wants to know the basis for the claim that the peptides have no activity when injected intravenously, so that they can counter any possible objections to their argument. At first sight, a Popperian might be delighted by Flower’s response. It is clear, however, that the question does not simply hinge on the presence or absence of evidence. Rather Flower’s comment shows that it depends on what they choose to accept as negative evidence. For him, the issue is a practical question. This example demonstrates that the logic of deduction cannot be isolated from its sociological grounds.

Similarly, a colleague’s claim was dismissed by showing an almost perfect fit between CRF, an important and long sought-after releasing factor, and a piece of haemoglobin, a relatively trivial protein. The dismissal effect is heightened by the creation of a link between his recent claim and the well-known blunder which the same colleague had committed a few years earlier.

They appear to have developed considerable skills in setting up devices which can pin down elusive figures, traces, or inscriptions in their craftwork, and in the art of persuasion. The latter skill enables them to convince others that what they do is important, that what they say is true, and that their proposals are worth funding. They are so skillful, indeed, that they manage to convince others not that they are being convinced but that they are simply following a consistent line of interpretation of available evidence.

Q: If you could summarize everything, how would you do it?

Our argument has one central feature: the set of statements considered too costly to modify constitute what is referred to as reality.

The result of the construction of a fact is that it appears unconstructed by anyone; the result of rhetorical persuasion in the agonistic field is that participants are convinced that they have not been convinced; the result of materialisation is that people can swear that material considerations are only minor components of the “thought process”; the result of the investments of credibility is that participants can claim that economics and beliefs are in no way related to the solidity of science; as to the circumstances, they simply vanish from accounts, being better left to political analysis than to an appreciation of the hard and solid world of facts!

By being sufficiently convincing, people will stop raising objections altogether, and the statement will move toward a fact-like status. Instead of being a figment of one’s imagination (subjective), it will become a “real objective thing,” the existence of which is beyond doubt.

Unrelated to all that, 8/30 edition

Humanity’s Longest Lasting Legacy May Be The Miles Of Holes We’ve Dug

The distance to the center of the Earth is roughly 3,960 miles (6,373 kilometers). Animal life stops 1.2 miles (2 km) below the surface — the depth where miners discovered deep-dwelling worms in South African gold mines. All known microbial life stops at a depth of around 1.7 miles (2.7 km). But humans have left a permanent mark well beyond those depths, geologists say.

Humans’ first underground foray came during the Bronze Age, when people began digging shallow mines in search of flint and metals. The Industrial Revolution of the 1800s sent humans even deeper below the surface. Still, many of the disturbances, like water wells, sewage systems and subway lines, were relatively shallow and extended less than 330 feet (100 m) below the surface. Only after 1950, a period referred to as the “Great Acceleration” by some geologists, did humans really plunge below 330 feet, Zalasiewicz and his colleagues explained.

A Unified Theory

For the last half-century we’ve had a popular notion that our intellectual culture is sundered in two — the literary and the scientific. “The two cultures” is the bumper-sticker phrase for this view. It dates back to a hugely influential 1959 lecture, also published in book form that year, by C. P. Snow — “a moderately able research chemist who had become a successful novelist,” in the historian Lisa Jardine’s not very adulatory description. According to Snow, on one side were the humanists, on the other the scientists, and between them lay a shameful “gulf of mutual incomprehension.”

In the 21st century, the two cultures are still with us, but the fault lines have shifted. Plenty of people can talk about thermodynamics and Shakespeare with equal facility; for that matter, no one has ever explained the second law better than Tom Stoppard in “Arcadia” (“You cannot stir things apart”). You’re probably comfortable with scientific expressions like “litmus test.” The question now is, can you explain a hash table? A linked list? A bubble sort? Maybe you can write — but can you code?



The smell of rain: what is petrichor?

  1. a pleasant smell that frequently accompanies the first rain after a long period of warm, dry weather.
    “other than the petrichor emanating from the rapidly drying grass, there was not a trace of evidence that it had rained at all”

After the goofy first 40 seconds, this video lists two things that make up the ‘smell of rain’: ozone and petrichor.

Petrichor is the decomposing plant matter that rain causes to erupt from the soil. This same substance is – supposedly – a signal to plants that the soil has been dry and will prevent seeds from sprouting.

But when I went looking for more scholarly information on petrichor I found it…practically non-existent. There were no articles that mentioned it in the 2000s. There was one article that mentioned it in the 1990s. There were a few articles that referred to a specific article from the ’70s. In fact, there were only two original research articles that I could find investigating petrichor as a scientific concept, both by the same two authors: IJ Bear and RG Thomas. These are “Petrichor and Plant Growth” and “Genesis of Petrichor”. As far as I can tell, none of the research was followed up on or replicated, though the idea has occasionally been taken up in other contexts.

Petrichor clearly exists as a smell, but it turns out that there is precious little knowledge of what it actually is.

[via Ed Yong]

The sound of silence

Sensory neurons receive input from the outside world and send these signals on to the rest of the nervous system. This makes the concept of ‘silence’ fairly intriguing: what happens when there is very little sensory signal for the rest of the brain to process? It is well known that, after a while, no sensory stimulation means massive hallucinations. But quiet – brief periods of weak or no relevant stimulus – is different:

In the mid 20th century, epidemiologists discovered correlations between high blood pressure and chronic noise sources like highways and airports. Later research seemed to link noise to increased rates of sleep loss, heart disease, and tinnitus. (It’s this line of research that hatched the 1960s-era notion of “noise pollution,” a name that implicitly refashions transitory noises as toxic and long-lasting.)

Sound waves vibrate the bones of the ear, which transmit movement to the snail-shaped cochlea. The cochlea converts physical vibrations into electrical signals that the brain receives. The body reacts immediately and powerfully to these signals, even in the middle of deep sleep. Neurophysiological research suggests that noises first activate the amygdalae, clusters of neurons located in the temporal lobes of the brain, associated with memory formation and emotion. The activation prompts an immediate release of stress hormones like cortisol. [neuroecology: really? all noises?]

…He found that the impacts of music could be read directly in the bloodstream, via changes in blood pressure, carbon dioxide, and circulation in the brain. (Bernardi and his son are both amateur musicians, and they wanted to explore a shared interest.) “During almost all sorts of music, there was a physiological change compatible with a condition of arousal,” he explains…But the more striking finding appeared between musical tracks. Bernardi and his colleagues discovered that randomly inserted stretches of silence also had a drastic effect, but in the opposite direction. In fact, two-minute silent pauses proved far more relaxing than either “relaxing” music or a longer silence played before the experiment started.

In light of this, I found the experiences of a hermit who has been living alone since the 1980s fascinating:

He explained about the lack of eye contact. “I’m not used to seeing people’s faces,” he said. “There’s too much information there. Aren’t you aware of it? Too much, too fast.” (Note: he may have Asperger’s)

“But you must have thought about things,” I said. “About your life, about the human condition.”

Chris became surprisingly introspective. “I did examine myself,” he said. “Solitude did increase my perception. But here’s the tricky thing—when I applied my increased perception to myself, I lost my identity. With no audience, no one to perform for, I was just there. There was no need to define myself; I became irrelevant. The moon was the minute hand, the seasons the hour hand. I didn’t even have a name. I never felt lonely. To put it romantically: I was completely free.”

…”What I miss most,” he eventually continued, “is somewhere between quiet and solitude. What I miss most is stillness.” He said he’d watched for years as a shelf mushroom grew on the trunk of a Douglas fir in his camp. I’d noticed the mushroom when I visited—it was enormous—and he asked me with evident concern if anyone had knocked it down. I assured him it was still there. In the height of summer, he said, he’d sometimes sneak down to the lake at night. “I’d stretch out in the water, float on my back, and look at the stars.”