BRAIN Initiative announcements

The list of this round of BRAIN Initiative awards from the NIH has been released. MyCousinAmygdala – who may or may not actually be Krang – made this word cloud (excluding words like “brain”). Score one for the public university system in California, right? But note also the lack of words like “cognition”. These are not all the awards, of course – there are a lot of NSF awards as well as DARPA ones, though I can’t find a complete list of those.

But this should give a good idea of what the initiative is actually interested in (apparently, people named “John”.)

If you’re especially curious, there’s a livestream of the announcements and White House press conference today at 3:30 PM EST.


Why Einstein is so famous

Why did Einstein’s fame burn brighter than any other scientist’s? This article in the New Yorker from 1933 explains it:

The chief agent in making Einstein the idol of the masses was Carr V. Van Anda, the great editor of the period. Van Anda had a genius for compelling millions to make his hobbies their hobbies. Ten years ago, for example, the average American became an amateur Egyptologist because Van Anda was an amateur Egyptologist. Starting with a routine dispatch telling of some promising excavations at Thebes, Van Anda filled the New York Times with endless columns about the Egypt of three thousand years ago and made Tut-ankh-Amen a household word. It had long been one of Van Anda’s journalistic hunches that an unlooted tomb of ancient Egypt would be discovered some day; it was another of his hunches that a flaw might be found in the Newtonian theory. He had learned to read hieroglyphics and had devoted himself to higher mathematics, so that he was thoroughly prepared to exploit both hunches. On the announcement that the relativity theory had been confirmed, he opened all the floodgates of publicity on Einstein. The pages of the Times in Van Anda’s day had the same authority with the journalistic profession that the sacred gold plates had among the Mormons of Joseph Smith’s day. The entire American press was soon struggling to explain relativity; the country weeklies held Einstein orgies patterned after the Einstein orgies in the Times. The European press got much of its interest in relativity by way of America, and even today the name of Einstein retains on this side of the Atlantic its peculiar rallying power as the battle cry of the culture-seeker.

It also paints a delightfully goofy portrait of the man:

When a magazine offered him an amazing sum for an article, he rejected it contemptuously. “What?” he exclaimed. “Do they think I am a prizefighter?” But he finally wrote the article after arguing the magazine into cutting the price in half. It is said that he declined his present post at the Institute for Advanced Study at Princeton on the ground that the salary was preposterously munificent, and was persuaded to accept only by the promise of an enormous pay cut…

Presented by Lord Haldane to the Royal Society in England as a man of unparalleled intellectual boldness, Einstein found himself intimidated by the livery of the Haldane retainers. “He is too formidable,” said the Professor later at the Haldane place, when Mrs. Einstein wanted to summon the butler to fix a window…

The Professor and his wife were both bewildered by the barbaric hospitality which overwhelmed them on their earlier visits to [America]. They agreed that they must blindly accept whatever occurred to them in this bizarre republic; at a dinner in Cleveland, Mrs. Einstein, shrugging her shoulders at what appeared to be an elegant American eccentricity, ate a bouquet of orchids which she found on what seemed to be a salad plate.

What has neuroscience done for machine intelligence? (Updated)

Today on the twitters, Michael Hendricks asked, “Why do AI people bother with how animal brains work? Most good inventions work by doing things totally unlike how an animal would.”

The short answer is that animal brains can already solve the problems that AI researchers want to solve; so why not look into how they are accomplishing it?

The long answer is that the algorithms we ultimately use may end up being dramatically different – but we need a starting point somewhere. And looking at some of the algorithms that have a neural inspiration, it is clear that thinking about how the nervous system works has helped machine learning/AI researchers come up with solutions to their problems:

  1. Neural networks. In the 1940s and 50s, McCulloch, Pitts, and Hebb all contributed to modeling how a nervous system might work. In some sense, neural nets are trapped in this 1940s view of the nervous system; but why not? At an abstract level, it’s close…ish. (A minimal sketch of this kind of threshold unit appears after this list.)
  2. Deep learning. Currently the Hot Shit in machine learning, these are like “neural networks 2.0”. Some quick history: traditionally, neural networks were built one layer at a time, with strict feedforward connectivity. One form of recurrent neural network, proposed by Hopfield, can be used to memorize patterns, or create ‘memories’ (see the sketch after this list). A variant on this, proposed by (computational neuroscientist) Terry Sejnowski and Geoff Hinton, is the Boltzmann machine. If you combine multiple layers of Boltzmann machines with ideas from biological development, you get Deep Learning (and you publish it in the journal Neural Computation!).
  3. Independent Component Analysis. Although this story is possibly apocryphal, one of the earliest algorithms for computing ICA was developed – by Tony Bell and Terry Sejnowski (again) – by thinking about how neurons maximize their information about the physical world.
  4. Temporal difference learning. To quote from the Scholarpedia page: “This line of research work began with the exploration of Klopf’s 1972 idea of generalized reinforcement which emphasized the importance of sequentiality in a neuronal model of learning.” (A toy TD(0) update is sketched below.)
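
Since a couple of these are easier to see in code than in prose, here are some minimal sketches (all toy examples of my own, in Python). First, the 1940s-style threshold unit from point 1; the weights and threshold are made up, chosen so the unit computes a simple AND:

```python
import numpy as np

def mcculloch_pitts(inputs, weights, threshold):
    """A McCulloch-Pitts unit: fire (1) if the weighted sum of the
    inputs reaches the threshold, stay silent (0) otherwise."""
    return int(np.dot(inputs, weights) >= threshold)

# With these (made-up) weights, the unit acts as an AND gate.
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, mcculloch_pitts(np.array(x), np.array([1, 1]), threshold=2))
```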
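
Next, the Hopfield-style pattern memory from point 2; the stored patterns and the corrupted cue are invented for illustration:

```python
import numpy as np

# Store two patterns (as +/-1 vectors) with the Hebbian outer-product rule.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

# Recall: start from a corrupted cue and let the network settle.
state = np.array([-1, -1, 1, -1, 1, -1])  # noisy version of the first pattern
for _ in range(5):
    state = np.sign(W @ state)
print(state)  # settles back to [1, -1, 1, -1, 1, -1] – a 'memory'
```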
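
And a toy TD(0) value update for point 4, on a made-up three-state world:

```python
import numpy as np

# TD(0): nudge a state's value toward the reward received plus the
# discounted value of the state that follows it.
alpha, gamma = 0.1, 0.9
V = np.zeros(3)  # values of states 0, 1, 2 (state 2 is terminal)

# A fixed episode: 0 -> 1 (no reward), then 1 -> 2 (reward of 1).
transitions = [(0, 0.0, 1), (1, 1.0, 2)]
for _ in range(100):
    for s, r, s_next in transitions:
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
print(V)  # V[1] -> 1.0 and V[0] -> gamma * V[1] = 0.9
```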

Additionally, companies like Qualcomm and the Brain Corporation are attempting to use ideas from spiking neural networks to make much more energy efficient devices.

In the other direction, neuroscientists can find that the brain appears to be implementing already-known ML algorithms (see this post on Nicole Rust). Many ideas and many biological specifics will be useless – but the hope of research is to find the tiny fraction of an idea that is useful to a new problem.

Updated:

Over on reddit, downtownslim offers two more examples:

Neocognitron was the foundation for the ConvNet. Fukushima came up with the model, LeCun figured out how to train it.

Support Vector Machines. This last one is quite interesting: not many people outside the neural computation community know that Support Vector Machines were influenced by the neural network community. They were originally called Support Vector Networks.

Dragonflies and helicopters?


Somehow I had always thought that dragonflies had inspired the design of the helicopter. However, I can’t seem to find any evidence for this and perhaps it is an urban legend. Here, instead, is an excellent history of the design of the helicopter:

Amongst his many elaborate drawings, the Renaissance visionary Leonardo da Vinci shows what is a basic human-carrying helicopterlike machine. His sketch of the “aerial-screw” or “air gyroscope” device is dated to 1483 but it was first published nearly three centuries later. (Da Vinci’s original drawing is MS 2173 of Manuscript (codex) B, folio 83 verso, in the collection of the Bibliothèque de l’Institut de France, Paris.) Da Vinci’s idea was an obvious elaboration of an Archimedes water-screw, but with keen insight into the problem of flight…

In the 1840s, another Englishman, Horatio Phillips, constructed a steam-driven vertical flight machine where steam generated by a miniature boiler was ejected out of the blade tips. Although impractical to build at full scale, Phillips’s machine was significant in that it marked the first time that a model helicopter had flown under the power of an engine rather than stored-energy devices such as wound-up springs…In the early 1860s, Ponton d’Amécourt of France flew a number of small steam-powered helicopter models. He called his machines hélicoptères, which is a word derived from the Greek adjective “helikoeides,” meaning spiral or winding, and the noun “pteron,” meaning feather or wing.

(Photo by Rovanto)

Deep learning and vision

Object recognition is hard. Famously, an attempt in the 1980s to use computers to automatically identify tanks in photos failed in a clever way:

But the scientists were worried: had it actually found a way to recognize if there was a tank in the photo, or had it merely memorized which photos had tanks and which did not? This is a big problem with neural networks, after they have been trained you have no idea how they arrive at their answers, they just do. The question was did it understand the concept of tanks vs. no tanks, or had it merely memorized the answers? So the scientists took out the photos they had been keeping in the vault and fed them through the computer. The computer had never seen these photos before — this would be the big test. To their immense relief the neural net correctly identified each photo as either having a tank or not having one…

Eventually someone noticed that in the original set of 200 photos, all the images with tanks had been taken on a cloudy day while all the images without tanks had been taken on a sunny day. The neural network had been asked to separate the two groups of photos and it had chosen the most obvious way to do it – not by looking for a camouflaged tank hiding behind a tree, but merely by looking at the colour of the sky. The military was now the proud owner of a multi-million dollar mainframe computer that could tell you if it was sunny or not.

But Deep Learning – and huge data sets – have driven a major breakthrough over the last few years:

Today, Olga Russakovsky at Stanford University in California and a few pals review the history of this competition and say that in retrospect, SuperVision’s comprehensive victory was a turning point for machine vision. Since then, they say, machine vision has improved at such a rapid pace that today it rivals human accuracy for the first time. [NE: I don’t think this is quite true…]

Convolutional neural networks consist of several layers of small neuron collections that each look at small portions of an image. The results from all the collections in a layer are made to overlap to create a representation of the entire image. The layer below then repeats this process on the new image representation, allowing the system to learn about the makeup of the image.

An interesting question is how the top algorithms compare with humans when it comes to object recognition. Russakovsky and co have compared humans against machines and their conclusion seems inevitable. “Our results indicate that a trained human annotator is capable of outperforming the best model (GoogLeNet) by approximately 1.7%,” they say…But the trend is clear. “It is clear that humans will soon outperform state-of-the-art image classification models only by use of significant effort, expertise, and time,” say Russakovsky and co.
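
To make “layers of small neuron collections that each look at small portions of an image” concrete, here is a bare-bones sketch of stacked convolutional layers in numpy (the image and filter are random stand-ins of my own, not anything from SuperVision or GoogLeNet):

```python
import numpy as np

def conv2d(image, kernel):
    """Each output value is one 'neuron' looking at a small patch
    of the layer below."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 8)
kernel = np.array([[1., -1.], [1., -1.]])  # a crude vertical-edge detector
layer1 = np.maximum(conv2d(image, kernel), 0)   # convolve, then rectify
layer2 = np.maximum(conv2d(layer1, kernel), 0)  # the next layer builds on it
print(layer2.shape)
```

Real networks learn their filter values rather than having them hand-picked, and stack many more layers.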

What is intelligence?

You may have heard that a recent GWAS found three genes for heritable intelligence, though with tiny effects. There was a great quote in a Nature News article on the topic:

“We haven’t found nothing,” he says.

Yeah, you don’t want that to be your money quote.

Kevin Mitchell has been tweeting about the study – I hope he storifies it! – and linked to an old post of his suggesting that the genetics of intelligence are really the genetics of stupidity: it’s not that these genes are making you smarter, but that they’re making you less dumb (as I gather, a lot of evidence suggests that ‘intelligence’ is related to overall health.)

Anyway, the SNPs that the GWAS identified are in KCNMA1, NRXN1, POU2F3, and SCRT, all of which are involved in glutamate neurotransmission. This is always troubling to my tiny brain, because I never quite understand how ‘intelligence’ works. People like to think that it is some kind of learning, so if we can just learn better we’ll be smarter. And that’s what the authors of the article hint at.

But how does that even make sense? Learning faster is, in a way, like being hyperreactive to the world. There’s a reason that overfitting is a problem in machine learning! There is, presumably, an optimal level of learning that evolution has stuck us with. So is the supposition that being overreactive – conforming too readily to stimuli in the world – is somehow good? Or is it that the modern world favors it whereas historically it would not have? Or what?
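
(If it helps, here is a toy demonstration – entirely my own, nothing to do with the GWAS – of why learning your experience too faithfully backfires:)

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 12)
y_train = x_train + rng.normal(0, 0.1, 12)  # a simple linear world, plus noise
x_test = np.linspace(0.05, 0.95, 12)
y_test = x_test + rng.normal(0, 0.1, 12)    # fresh samples from the same world

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(degree, train_err, test_err)
# The degree-9 'fast learner' nails its training data and (typically)
# does worse on data it has never seen.
```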

Yet another programming note

I’m in the midst of some intense experiments and don’t really have time for writing or thinking for the next week or two, which is why things have slowed down…

I’ll try to post snippets of articles I find interesting – not that I have much time for reading – or very brief thoughts on relevant articles that I scan. Expect the blog to be even more glib than usual! I’ll try to keep it semi-active, though.

Unrelated to all that, 9/5 edition

MY FIRST GRADUATE SCHOOL ROTATION, WRITTEN AS A BUDDY COP MOVIE

FEINSTEIN: Hey, there’s something else you should know, Johnson. He’s not just with the graduate program. He’s an MD/PhD student.

JOHNSON: [His mouth hangs open in rage.] He’s with the medical school? There’s no way I’m working with him now! We’ve been fighting with them over jurisdiction on this research project for months now. Remember that axon-pathfinding paper we almost had published before it was stolen by those patient-coddlers? We don’t need a future stethoscope-pusher hanging around our lab.

The phone rings, and FEINSTEIN answers it.

FEINSTEIN: Yeah? Really? Glutamatergic and GABAergic neurotransmitter vesicles? All right. I’ve got the perfect guys for the job. [He hangs up phone.]

FEINSTEIN: My inside man over at Nature Neuroscience tells me that those pseudo-scientists over at that Hopkins lab are trying to scoop us on your paper, Johnson. My pal is going to work over their submitted manuscript with some nasty reviews, but I’m going to need you two to get down to your bench and get me some electron microscopy images so we take ’em out.

How to Get Into an Ivy League College—Guaranteed

Ma’s algorithm, for example, predicts that a U.S.-born high school senior with a 3.8 GPA, an SAT score of 2,000 (out of 2,400), moderate leadership credentials, and 800 hours of extracurricular activities, has a 20.4 percent chance of admission to New York University and a 28.1 percent shot at the University of Southern California. Those odds determine the fee ThinkTank charges that student for its guaranteed consulting package: $25,931 to apply to NYU and $18,826 for USC. “Of course we set limits on who we’ll guarantee,” says Ma. “We don’t want to make this a casino game.”
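
A back-of-the-envelope observation – my own inference, not anything ThinkTank has disclosed: the two quoted fees are almost exactly inversely proportional to the predicted odds, as if each guarantee were priced around a fixed expected payout of roughly $5,300:

```python
# fee * probability comes out nearly constant for both quoted schools
for school, fee, p in [("NYU", 25931, 0.204), ("USC", 18826, 0.281)]:
    print(school, round(fee * p))  # ~5290 in both cases
```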


Greg Dunn’s neural landscape

(Image: “Cortex in Metallic Pastels,” gold leaf, by Greg Dunn)

Every so often these fantastic neural paintings by Greg Dunn get passed around. I never wondered about the backstory until now:

My artistic career began during my tenure as a graduate student in neuroscience at the University of Pennsylvania. As I came to learn, molecular research can be an existential exercise in that you must rely on machines and chemical reagents to “see” your experiments. Painting provided me a welcome respite from lab frustrations because it gave me a sense of control. When painting, I can experiment and immediately see the result, judge it against my intentions, and iterate as necessary. I can convey my thoughts to the world without having to worry about grants, contaminated compounds, the politics of publishing, or an unexpected flood in the mouse room threatening to wash away my study subjects.

…My graduate school days were filled with stunning microscopic imagery. Neurons, in particular, resonated with me. With their chaotic, unpredictable branching patterns, neurons have much in common aesthetically with traditional subjects of Chinese, Japanese, and Korean ink painting, such as trees and branches. Viewed as landscapes, neuronal vistas would fit easily within an Asian context. I began to experiment with merging the two.

From American Scientist

An “interview” with Laboratory Life

Note: Reading Latour’s “Laboratory Life”, I found that there were too many great quotes to summarize. Having once been chastened by a high school teacher that “when there is a great writer, you should let them have their voice”, I thought that a Q&A with the book would be a suitable way to get the ideas across. All answers are quotes from the book in one way or another. This is not meant to be an introduction to Laboratory Life but rather a series of quotes that I found particularly interesting.

Q: You are a book about how facts – or should I say, “facts” – are constructed in science. In order to understand how that happens, you spent some time in Roger Guillemin’s neuroendocrinology laboratory at the Salk Institute. How would you describe – anthropologically – what you saw there?

Firstly, at the end of each day, technicians bring piles of documents from the bench space through to the office space. In a factory we might expect these to be reports of what has been processed and manufactured. For members of this laboratory, however, these documents constitute what is yet to be processed and manufactured. Secondly, secretaries post off papers from the laboratory at an average rate of one every ten days. However, far from being reports of what has been produced in the factory, members take these papers to be the product of their unusual factory.

By dividing the annual budget of the laboratory by the number of articles published (and at the same time discounting those articles in the laymen’s genre), our observer calculated that the cost of producing a paper was $60,000 in 1975 and $30,000 in 1976 (ed: approximately $260,000 and $123,000, respectively, in 2013 terms). Clearly, papers were an expensive commodity!

Moreover, nearly all the peptides (90 percent) are manufactured for internal consumption and are not available as output. The actual output (for example, 3.2 grams in 1976) is potentially worth $130,000 at market value, and although it cost only $30,000 to produce, samples are sent free of charge to outside researchers who have been able to convince one of the members of the laboratory that his or her research is of interest.

Q: Then it seems that, to an outsider observing a laboratory, it is the papers that are important, not the experiments. How did the scientists react when they heard this?

Indeed, our observer incurred the considerable anger of members of the laboratory, who resented their representation as participants in some literary activity. In the first place, this failed to distinguish them from any other writers. Secondly, they felt that the important point was that they were writing about something, and that this something was “neuroendocrinology.” They claimed merely to be scientists discovering facts; [I] doggedly argued that they were writers and readers in the business of being convinced and convincing others.

Q: If the work of science is fundamentally literary, it must have a set of precursors – myths, legends, etc – that it draws on. Could you illustrate that somehow?

Neuroendocrinology seemed to have all the attributes of a mythology: it had had its precursors, its mythical founders, and its revolutions. In its simplest version, the mythology goes as follows: After World War II it was realised that nerve cells could also secrete hormones and that there is no nerve connection between brain and pituitary to bridge the gap between the central nervous system and the hormonal system. A competing perspective, designated the “hormonal feedback model” was roundly defeated after a long struggle by participants who are now regarded as veterans. As in many mythological versions of the scientific past, the struggle is now formulated in terms of a fight between abstract entities such as models and ideas. Consequently, present research appears based on one particular conceptual event, the explanation of which only merits scant elaboration by scientists. The following is a typical account: “In the 1950s there was a sudden crystallization of ideas, whereby a number of scattered and apparently unconnected results suddenly made sense and were intensely gathered and reviewed.”

However, the mythology of its development is very rarely mentioned in the course of the day-to-day activities of members of the laboratory. The beliefs that are central to the mythology are noncontroversial and taken for granted, and only enjoy discussion during the brief guided tours of the laboratory provided for visiting laymen. In the setting, it is difficult to determine whether the mythology is never alluded to simply because it is a remote and unimportant remnant of the past or because it is now a well-known and generally accepted item of folklore.

Q: Okay, but most scientists would say that they spend their time performing experiments in order to establish facts. 

Let us start with the concept of noise. Information is a relation of probability; the more a statement differs from what is expected, the more information it contains. It follows that a central question for any participant advocating a statement in the field is how many alternative statements are equally probable. If a large number can easily be thought of, the original statement will be taken as meaningless and hardly distinguishable from others. If the others seem much less likely than the original statement, the latter will stand out and be taken as a meaningful contribution.

The whole series of transformations, between the rats from which samples are initially extracted and the curve which finally appears in publication, involves an enormous quantity of sophisticated apparatus. By contrast with the expense and bulk of this apparatus, the end product is no more than a curve, a diagram, or a table of figures written on a frail sheet of paper. It is this document, however, which is scrutinised by participants for its “significance” and which is used as “evidence” in part of an argument or in an article. Thus, the main upshot of the prolonged series of transformations is a document which, as will become clear, is a crucial resource.

Instead of a “nice curve,” it is all too easy to obtain a chaotic scattering of random points of curves which cannot be replicated. Every curve is surrounded by a flow of disorder, and is only saved from dissolution because everything is written or routinised in such a way that a point cannot as well be in any place of the log paper. The investigator placed a premium on those effects which were recordable; the data were cleaned up so as to produce peaks which were clearly discernible from the background; and, finally, the resulting figures were used as sources of persuasion in an argument.

It was obvious to our observer, however, that everything taken as self-evident in the laboratory was likely to have been the subject of some dispute in earlier papers.

Q: Could you describe the types of facts?

Statements corresponding to a taken-for-granted fact were denoted type 5 statements. Precisely because they were taken for granted, our observer found that such statements rarely featured in discussions between laboratory members. The greater the ignorance of a newcomer, the deeper the informant was required to delve into layers of implicit knowledge, and the farther into the past. Beyond a certain point, persistent questioning by the newcomer about “things that everybody knew” was regarded as socially inept.

More commonly, type 4 statements formed part of the accepted knowledge disseminated through teaching texts. It is, by contrast with type 5 statements, made explicit. This type of statement is often taken as the prototype of scientific assertion.

Many type 3 statements were found in review discussions and are of the form, “A has a certain relationship with B.” By deleting modalities from type 3 statements it is possible to obtain type 4 statements. For instance, “Oxytocin is generally assumed to be produced by the neurosecretory cells of the paraventricular nuclei.”

Type 2 statements could be identified as containing modalities which draw attention to the generality of available evidence (or the lack of it). For example: “There is a large body of evidence to support the concept of a control of the pituitary by the brain.”

Type 1 statements comprise conjectures or speculations (about a relationship) which appear most commonly at the end of papers.

It would follow that changes in statement type would correspond to changes in fact-like status. For example, the deletion of modalities in a type 3 statement would leave a type 4 statement, whose facticity would be correspondingly enhanced.

Q: Okay, let’s take this metaphor seriously. If the purpose of a laboratory is to construct papers for the purpose of persuasion – why does a scientist do this?

It is true that a good deal of laboratory conversations included mention of the term credit. The observer’s notebooks reveal the almost daily reference to the distribution of credit. It was a commodity which could be exchanged. The beginning of a scientist’s career entails a series of decisions by which individuals gradually accumulate a stock of credentials. These credentials correspond to the evaluation by others of possible future investments in that scientist. The investments have an enormous payoff both because of a concentration of credit in the institute and because of a high demand for credible information in the field. In terms of his pursuit of reward, his career makes little sense; as an investor of credibility it has been very successful.

For example, a successful investment might mean that people phone him, his abstracts are accepted, others show interest in his work, he is believed more easily and listened to with greater attention, he is offered better positions, his assays work well, data flow more reliably and form a more credible picture. The objective of market activity is to extend and speed up the credibility cycle as a whole. Those unfamiliar with daily scientific activity will find this portrayal of scientific activity strange unless they realise that only rarely is information itself “bought.” Rather, the object of “purchase” is the scientist’s ability to produce some sort of information in the future. The relationship between scientists is more like that between small corporations than that between a grocer and his customer.

Another key feature of the hierarchy is the extent to which people are regarded as replaceable. When, for example, a participant talks about leaving the group, he often expresses concern about the fate of the antisera, fractions, and samples for which he has been responsible. It is these, together with the papers he has produced, that represent the riches needed by a participant to enable him to settle elsewhere and write further papers. Since the value of information is thought to depend on its originality, the higher a participant in the hierarchy the less replaceable he is thought to be.

Q: Credit is important because it means that the science is deemed more credible. Talk about how this affects the perception of the science.

For instance, the standing of one scientist might be such that when he defines a problem as important, no one feels able to object that it is a trivial question; consequently, the field may be moulded around this important question, and funds will be readily forthcoming. One review specified fourteen criteria which had to be met before the existence of a new releasing factor could be accepted. These criteria were so stringent that only a few signals could be distinguished from the background noise. This, in turn, meant that most previous literature had to be dismissed. By increasing the material and intellectual requirements, the number of competitors was reduced. 

Whether or not the number and quality of inscriptions constituted a proof depended on negotiations between members. Let’s say that Wilson wants to know the basis for the claim that the peptides have no activity when injected intravenously, so that they can counter any possible objections to their argument. At first sight, a Popperian might be delighted by Flower’s response. It is clear, however, that the question does not simply hinge on the presence or absence of evidence. Rather Flower’s comment shows that it depends on what they choose to accept as negative evidence. For him, the issue is a practical question. This example demonstrates that the logic of deduction cannot be isolated from its sociological grounds.

Similarly, a colleague’s claim was dismissed by showing an almost perfect fit between CRF, an important and long sought-after releasing factor, and a piece of haemoglobin, a relatively trivial protein. The dismissal effect is heightened by the creation of a link between his recent claim and the well-known blunder which the same colleague had committed a few years earlier.

They appear to have developed considerable skills in setting up devices which can pin down elusive figures, traces, or inscriptions in their craftwork, and in the art of persuasion. The latter skill enables them to convince others that what they do is important, that what they say is true, and that their proposals are worth funding. They are so skillful, indeed, that they manage to convince others not that they are being convinced but that they are simply following a consistent line of interpretation of available evidence.

Q: If you could summarize everything, how would you do it?

Our argument has one central feature: the set of statements considered too costly to modify constitute what is referred to as reality.

The result of the construction of a fact is that it appears unconstructed by anyone; the result of rhetorical persuasion in the agonistic field is that participants are convinced that they have not been convinced; the result of materialisation is that people can swear that material considerations are only minor components of the “thought process”; the result of the investments of credibility is that participants can claim that economics and beliefs are in no way related to the solidity of science; as to the circumstances, they simply vanish from accounts, being better left to political analysis than to an appreciation of the hard and solid world of facts!

By being sufficiently convincing, people will stop raising objections altogether, and the statement will move toward a fact-like status. Instead of being a figment of one’s imagination (subjective), it will become a “real objective thing,” the existence of which is beyond doubt.