Anki Drive and the coming rise of intelligent toys

So there is this new toy called the Anki Drive: basically old-fashioned Scalextric (racing cars on a track) combined with an iPhone. It doesn’t sound that exciting at first – the phone just lets you add little touches like “shooting missiles” at the other cars to make it a bit more video game-esque.

But the really exciting thing? The iPhone also controls the other cars autonomously, giving each one its own personality – aggressive, cooperative, and so on.

Yes: this toy gives children the gift of multiple competitive artificial intelligences with unique personalities, as if that were no big deal. Next up: drone friends? Commercialized artificial intelligence in, well, everything is on its way.

Merry Christmas.

What has neuroscience done for machine intelligence? (Updated)

Today on the twitters, Michael Hendricks asked, “Why do AI people bother with how animal brains work? Most good inventions work by doing things totally unlike how an animal would.”

The short answer is that animal brains can already solve the problems that AI researchers want to solve; so why not look into how they are accomplishing it?

The long answer is that the algorithms we ultimately use may end up being dramatically different – but we need a starting point somewhere. Looking at some of the algorithms that have a neural inspiration, it is clear that thinking about how the nervous system works has let machine learning/AI researchers arrive at real solutions to their problems (toy code sketches of each idea follow the list):

  1. Neural networks. In the 1940s and 50s, McCulloch, Pitts, and Hebb all contributed to modeling how a nervous system might work. In some sense, neural nets are trapped in this 1940s view of the nervous system; but why not? At an abstract level, it’s close…ish.
  2. Deep learning. Currently the Hot Shit in machine learning, these are like “neural networks 2.0”. Some quick history: traditionally, neural networks had strictly feedforward connectivity, with activity passed along one layer at a time. One form of recurrent neural network, proposed by Hopfield, can instead be used to memorize patterns – to create ‘memories’. A variant on this, proposed by (computational neuroscientist) Terry Sejnowski and Geoff Hinton, is the Boltzmann machine. If you stack multiple layers of Boltzmann machines and combine them with ideas from biological development, you get Deep Learning (and you publish it in the journal Neural Computation!).
  3. Independent Component Analysis. Although this story is possibly apocryphal, one of the earliest algorithms for computing ICA was developed – by Tony Bell and Terry Sejnowski (again) – by thinking about how neurons maximize their information about the physical world.
  4. Temporal difference learning. To quote from the Scholarpedia page: “This line of research work began with the exploration of Klopf’s 1972 idea of generalized reinforcement which emphasized the importance of sequentiality in a neuronal model of learning”.
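
To make the 1940s ingredients concrete, here is a minimal sketch of my own (not from the original papers, and with all parameters chosen arbitrarily): a McCulloch–Pitts threshold unit wired as logic gates, plus Hebb’s “fire together, wire together” rule applied to a linear neuron.

```python
import numpy as np

def mp_neuron(x, w, theta):
    """McCulloch-Pitts unit: fire (1) iff the weighted input sum reaches threshold theta."""
    return int(np.dot(w, x) >= theta)

# Hand-wired logic, in the spirit of the 1943 paper: the same unit computes
# AND or OR depending only on where the threshold sits.
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "AND:", mp_neuron(x, (1, 1), theta=2), "OR:", mp_neuron(x, (1, 1), theta=1))

# Hebb's 1949 rule: strengthen weights where pre- and postsynaptic activity
# coincide. With normalization to keep the weights bounded, the weight vector
# drifts toward the correlated direction shared by its inputs.
rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=2)
for _ in range(500):
    x = rng.multivariate_normal([0, 0], [[1.0, 0.9], [0.9, 1.0]])  # correlated inputs
    y = w @ x                       # linear "firing rate"
    w += 0.01 * y * x               # Hebbian update: pre * post
    w /= np.linalg.norm(w)          # normalization (Oja-flavored) prevents blow-up
print("learned weight direction:", np.round(w, 2))  # ~ +/-[0.71, 0.71]
```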
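
The pattern-memorizing trick of item 2 fits in a few lines as well. This is a toy version of the standard textbook Hopfield construction (not Hopfield’s own code): store three random binary patterns in a symmetric weight matrix, then recall one from a corrupted cue.

```python
import numpy as np

def store(patterns):
    """Hebbian storage: sum of outer products of the patterns, no self-connections."""
    W = patterns.T @ patterns / patterns.shape[1]
    np.fill_diagonal(W, 0)
    return W

def recall(W, state, steps=10):
    """Repeatedly threshold the recurrent input; the state settles into a stored 'memory'."""
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(3, 64))     # three random +/-1 'memories'
W = store(patterns)

cue = patterns[0].copy()
flipped = rng.choice(64, size=12, replace=False)
cue[flipped] *= -1                               # corrupt ~20% of the bits
print("memory 0 recovered:", np.array_equal(recall(W, cue), patterns[0]))
```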
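
Item 3 in miniature: the Bell–Sejnowski infomax rule adapts an unmixing matrix so that a layer of sigmoidal “neurons” conveys as much information as possible about its inputs. This compressed sketch uses made-up toy sources and the natural-gradient form of the update; treat it as an illustration, not a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.laplace(size=(2, 5000))      # two independent, heavy-tailed source signals
A = rng.normal(size=(2, 2))          # unknown mixing matrix
X = A @ S                            # the observed mixtures

W = np.eye(2)                        # unmixing matrix to be learned
for _ in range(300):
    Y = W @ X
    g = -np.tanh(Y / 2)              # equals 1 - 2*sigmoid(Y): the infomax nonlinearity
    W += 0.01 * (np.eye(2) + g @ Y.T / Y.shape[1]) @ W   # natural-gradient infomax step

print("W @ A, ideally ~ a scaled permutation matrix:")
print(np.round(W @ A, 2))
```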
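
And item 4: the TD(0) update nudges each state’s value toward the immediate reward plus the value of the next state. The mismatch – the “reward prediction error” – is the quantity later linked to dopamine. A toy chain example, with all parameters arbitrary:

```python
import numpy as np

n_states, alpha, gamma = 5, 0.1, 1.0
V = np.zeros(n_states + 1)           # one value per state, plus a terminal state worth 0

for episode in range(500):
    s = 0
    while s < n_states:              # deterministically walk down the chain
        s_next = s + 1
        r = 1.0 if s_next == n_states else 0.0   # reward arrives only at the end
        delta = r + gamma * V[s_next] - V[s]     # TD error: the reward prediction error
        V[s] += alpha * delta                    # TD(0) update nudges the estimate
        s = s_next

print(np.round(V[:n_states], 2))     # every value heads toward 1.0: the reward is certain
```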

Additionally, companies like Qualcomm and the Brain Corporation are attempting to use ideas from spiking neural networks to make much more energy-efficient devices.

In the other direction, neuroscientists can find that the brain appears to be implementing already-known ML algorithms (see this post on Nicole Rust). Many ideas and many biological specifics will be useless – but the hope of research is to find the tiny fraction of an idea that proves useful for a new problem.

Updated:

Over on reddit, downtownslim offers two more examples:

Neocognitron was the foundation for the ConvNet. Fukushima came up with the model; LeCun figured out how to train it.

Support Vector Machines. This last one is quite interesting; not many people outside the neural computation community know that support vector machines were influenced by the neural network community. They were originally called Support Vector Networks.

Monday thought/open question: What is the goal of the nervous system? (Updated)

In systems neuroscience, we like to say that the goal of the visual system is to extract as much information about the world as possible. We start in the retina with points of light; those points are correlated (look around you: the color of one part of the visual world is often very similar to the color right next to it). So the next set of neurons (ON/OFF cells) represents sudden changes in brightness, which decorrelates the signal. In the first stage of visual cortex, we find neurons that respond to edges – areas where you could put several ON/OFF receptive fields in a row. The responses of visual neurons become successively more unrelated to each other as you go deeper into cortex – they start representing more abstract shapes and then, say, individual faces. But our guiding principle through all of this is that visual neurons are trying to represent as much information about the visual world as possible.
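
One way to see the decorrelation step in action (a toy sketch I’m adding here, with 1/f “pink” noise standing in for a natural image): neighboring pixels are strongly correlated, and a center-surround “ON/OFF” filter – approximated as a difference of Gaussians – strips much of that redundancy out.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Build a 1/f ('pink noise') image as a crude stand-in for a natural scene.
rng = np.random.default_rng(0)
n = 256
fx, fy = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n))
radial = np.hypot(fx, fy)
radial[0, 0] = 1.0 / n                           # avoid dividing by zero at DC
image = np.real(np.fft.ifft2(np.fft.fft2(rng.standard_normal((n, n))) / radial))

def neighbor_corr(img):
    """Correlation between each pixel and its right-hand neighbor."""
    return np.corrcoef(img[:, :-1].ravel(), img[:, 1:].ravel())[0, 1]

# Center-surround receptive field: narrow excitatory center minus broad
# inhibitory surround, approximated as a difference of Gaussians.
on_off = gaussian_filter(image, sigma=1) - gaussian_filter(image, sigma=3)

print(f"raw image,      neighbor correlation: {neighbor_corr(image):.2f}")
print(f"filtered image, neighbor correlation: {neighbor_corr(on_off):.2f}")
```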

But now let’s look at the nervous system from a broader view. What is it trying to accomplish? If we were economists, we might say that the nervous system is trying to maximize the ‘utility’ of the animal; an ecologist might say that it is trying to maximize the reproductive success of an animal (or: of an animal’s offspring, or its genes).

Is this a reasonable view of the ‘goal’ of the nervous system? If so, where do the goals of the input and the output meet? When do neurons in the visual system of the animal begin representing value, or utility, at some level? Is there some principle from computer science that has something to say about value and sensory representation?

Update: There was a lot of discussion on twitter, which I have partially summarized here.

Why does Gary Marcus hate computational neuroscience?

OK, this story on the BRAIN Initiative in the New Yorker is pretty weird:

To progress, we need to learn how to combine the insights of molecular biochemistry…with the study of computation and cognition… (Though some dream of eliminating psychology from the discussion altogether, no neuroscientist has ever shown that we can understand the mind without psychology and cognitive science.)

Who, exactly, has suggested eliminating psychology from the study of neuroscience? Anyone? And then there’s this misleading paragraph:

The most important goal, in my view, is buried in the middle of the list at No. 5, which seeks to link human behavior with the activity of neurons. This is more daunting than it seems: scientists have yet to even figure out how the relatively simple, three-hundred-and-two-neuron circuitry of the C. Elegans worm works, in part because there are so many possible interactions that can take place between sets of neurons. A human brain, by contrast, contains approximately eighty-six billion neurons.

As a C. elegans researcher, I have to say: it’s true there’s a lot we don’t know about worm behavior! There are also not quite as many worm behavioralists as there are, say, human behavioralists. But there is a lot that we do know. We know full circuits for several behaviors, and with the tools that we have now, that number is going to explode over the next few years.

But then we learn that, whatever else, Gary Marcus really doesn’t like the work that computational neuroscientists have done to advance their tools and models:

Perhaps the least compelling aspect of the report is one of its justifications for why we should invest in neuroscience in the first place: “The BRAIN Initiative is likely to have practical economic benefits in the areas of artificial intelligence and ‘smart’ machines.” This seems unrealistic in the short- and perhaps even medium-term: we still know too little about the brain’s logical processes to mine them for intelligent machines. At least for now, advances in artificial intelligence tend to come from computer science (driven by its longstanding interest in practical tools for efficient information processing), and occasionally from psychology and linguistics (for their insights into the dynamics of thought and language).

Interestingly, he gives his own fields – psychology and linguistics – a pass, crediting them with how much more they’ve done. So besides, obviously, the study of neural networks, let’s think about what other aspects of AI have been influenced by neuroscience. I’d count deep learning as a bit separate – and clearly Google’s pretty excited about that. Algorithms for ICA, a dimensionality reduction method used in machine learning, were influenced by ideas about how the brain uses information (Tony Bell). Studies of the roles of dopamine and serotonin have contributed to reinforcement learning. Those are just the first things I can think of off the top of my head (interestingly, almost all of this sprouted out of the lab of Terry Sejnowski). There have been strong efforts on dimensionality reduction – an important component of machine learning – from many, many labs in computational neuroscience. These all seem important to me; what, exactly, does Gary Marcus want? He doubles down on it in the last paragraph:

There are plenty of reasons to invest in basic neuroscience, even if it takes decades for the field to produce significant advances in artificial intelligence.

What’s up with that? There are even whole companies whose sole purpose is to design better algorithms based on principles from spiking networks. Based on his previous output, he seems dismissive of modern AI (such as deep learning). Artificial intelligence is no longer the symbolic enterprise we used to think it was: it’s powerful statistical techniques. We don’t live in the time of Chomskian AI anymore! It’s the era of Norvig. And modern AI focuses on statistical principles that are highly influenced by ideas from neuroscience.