RIP Marvin Minsky, 1927-2016

[Image: Marvin Minsky in Detroit]

I awoke to sad news this morning – Marvin Minsky passed away at the age of 88. Minsky’s writing was the first serious work on artificial intelligence that I ever read, and it is one of the reasons I am in neuroscience today.

Minsky is perhaps most infamous for his book Perceptrons, which showed that the single-layer neural networks of the time could not compute functions such as XOR (here is the solution, which every neuroscientist should know!).
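
As a quick aside for anyone who has not seen the solution written out: XOR is not linearly separable, so no single threshold unit can compute it, but one hidden layer fixes that. Here is a minimal hand-wired sketch in Python (my own illustration, not from the book):

```python
def step(x, threshold):
    # Heaviside threshold unit, the kind of unit Perceptrons analyzes
    return 1 if x >= threshold else 0

def xor_net(a, b):
    h_or  = step(a + b, 0.5)        # hidden unit computing a OR b
    h_and = step(a + b, 1.5)        # hidden unit computing a AND b
    return step(h_or - h_and, 0.5)  # output: OR and not AND, i.e. XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # prints 0, 1, 1, 0
```

The trick is that the hidden units carve the input space into pieces (OR and AND) that the output unit can then separate linearly.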

Minsky is also known for the 1956 Dartmouth Summer Research Project on Artificial Intelligence, whose proposal is really worth reading in full.

Fortunately, Minsky put many of his writings online, and I have been rereading them this morning. You can read his thoughts on communicating with Alien Intelligence:

All problem-solvers, intelligent or not, are subject to the same ultimate constraints–limitations on space, time, and materials. In order for animals to evolve powerful ways to deal with such constraints, they must have ways to represent the situations they face, and they must have processes for manipulating those representations.

ECONOMICS: Every intelligence must develop symbol-systems for representing objects, causes and goals, and for formulating and remembering the procedures it develops for achieving those goals.

SPARSENESS: Every evolving intelligence will eventually encounter certain very special ideas–e.g., about arithmetic, causal reasoning, and economics–because these particular ideas are very much simpler than other ideas with similar uses.

He also mentions this experiment, which sounds fascinating. I was not aware of it and cannot find the actual paper. If anyone can send me the citation, please leave a comment!

A TECHNICAL EXPERIMENT. I once set out to explore the behaviors of all possible processes–that is, of all possible computers and their programs. There is an easy way to do that: one just writes down, one by one, all finite sets of rules in the form which Alan Turing described in 1936. Today, these are called “Turing machines.” Naturally, I didn’t get very far, because the variety of such processes grows exponentially with the number of rules in each set. What I found, with the help of my student Daniel Bobrow, was that the first few thousand such machines showed just a few distinct kinds of behaviors. Some of them just stopped. Many just erased their input data. Most quickly got trapped in circles, repeating the same steps over again. And every one of the remaining few that did anything interesting at all did the same thing. Each of them performed the same sort of “counting” operation: to increase by one the length of a string of symbols–and to keep repeating that. In honor of their ability to do what resembles a fragment of simple arithmetic, let’s call them “A-Machines.” Such a search will expose some sort of “universe of structures” that grows and grows. For our combinations of Turing machine rules, that universe seems to look something like this:

[Image: Minsky’s sketch of the universe of Turing machine behaviors]
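
Out of curiosity, here is a rough sketch in Python of the spirit of that experiment. To be clear, this is my own reconstruction, not Minsky and Bobrow’s actual procedure: the number of states and symbols (2 each) and the step budget are arbitrary choices of mine. It enumerates every small rule table, runs each machine on a blank tape, and tallies the crude behavior classes the quote describes:

```python
from itertools import product
from collections import Counter

STATES  = (0, 1)   # assumption: 2 control states
SYMBOLS = (0, 1)   # assumption: binary tape alphabet
BUDGET  = 200      # assumption: step limit before giving up

def run(rules):
    """Run one machine on a blank tape and classify its behavior."""
    tape, pos, state = {}, 0, 0
    seen = set()
    for _ in range(BUDGET):
        config = (state, pos, tuple(sorted(tape.items())))
        if config in seen:
            return "loops"               # exact configuration repeated
        seen.add(config)
        write, move, nxt = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt is None:
            return "halts"
        state = nxt
    return "still running"               # never repeated a configuration

# an action is (symbol to write, head move, next state or None for halt)
actions = list(product(SYMBOLS, (-1, +1), list(STATES) + [None]))

counts = Counter()
for choice in product(actions, repeat=len(STATES) * len(SYMBOLS)):
    rules = dict(zip(product(STATES, SYMBOLS), choice))
    counts[run(rules)] += 1
print(counts)
```

Loop detection here is exact-configuration repetition, so anything that keeps lengthening the tape, like the counting “A-Machines” he describes, lands in the last bucket rather than being proven non-halting.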

In Why Most People Think Computers Can’t, he gets off a couple of cracks at people who think computers can’t do anything humans can:

Most people assume that computers can’t be conscious, or self-aware; at best they can only simulate the appearance of this. Of course, this assumes that we, as humans, are self-aware. But are we? I think not. I know that sounds ridiculous, so let me explain.

If by awareness we mean knowing what is in our minds, then, as every clinical psychologist knows, people are only very slightly self-aware, and most of what they think about themselves is guess-work. We seem to build up networks of theories about what is in our minds, and we mistake these apparent visions for what’s really going on. To put it bluntly, most of what our “consciousness” reveals to us is just “made up”. Now, I don’t mean that we’re not aware of sounds and sights, or even of some parts of thoughts. I’m only saying that we’re not aware of much of what goes on inside our minds.

Finally, he has some things to say on Symbolic vs Connectionist AI:

Thus, the present-day systems of both types show serious limitations. The top-down systems are handicapped by inflexible mechanisms for retrieving knowledge and reasoning about it, while the bottom-up systems are crippled by inflexible architectures and organizational schemes. Neither type of system has been developed so as to be able to exploit multiple, diverse varieties of knowledge.

Which approach is best to pursue? That is simply a wrong question. Each has virtues and deficiencies, and we need integrated systems that can exploit the advantages of both. In favor of the top-down side, research in Artificial Intelligence has told us a little—but only a little—about how to solve problems by using methods that resemble reasoning. If we understood more about this, perhaps we could more easily work down toward finding out how brain cells do such things. In favor of the bottom-up approach, the brain sciences have told us something—but again, only a little—about the workings of brain cells and their connections.

Apparently, he viewed the symbolic/connectionist split like so:

[Image: Minsky’s diagram of the connectionist vs. symbolic split]
