The answers to the Edge annual question are up: what do you think about machines that think?
Here were some of my favorite answers:
George Church, Carlo Rovelli, Donald D. Hoffman, Melanie Swan, Scott Atran, Richard Thaler, Terrence J. Sejnowski, Alex (Sandy) Pentland, Ursula Martin (also the winner of the most lyrical answer), Sheizaf Rafaeli, David Christian, George Dyson, Douglas Rushkoff, Helen Fisher, Tomaso Poggio, Bruce Schneier
Here are answers I was not fond of (you’ll sense a theme here, and yes I am obnoxiously opinionated about this particular subject):
I was surprised at how few women there were, so I made this chart of the respondents since 2010 (a rough count suggests that there were 16, 21, 24, 26, 34, and 13 from 2010-2015). It seems, uh, less than optimal.
Anywho, here is a quick blurb about how I would have answered:
What do I think about machines that think?
I think that we cannot understand what machines that think will be like. Look out at the living world and ask yourself how well you understand the motivations of the animals that reside in it. Sure, you can guess at the things most related to us – cats, dogs, mice – but even these can be obtuse in their thought patterns. But, as Nagel asked us, consider the bat: how well can you place yourself in the mind of a creature that sees with its ears? Or ‘thinks’ in smells instead of sights?
It gets even harder when we consider animals further out. What do I think of how ants think? Of sea squirts that live and then eat their own brains, converting themselves into pseudo-plants? How do I comprehend their lives and their place in the world compared to ours?
In truth, humans have largely left the natural world that required us to interact with other animals. During the European Middle Ages, the agency of other animals was so taken for granted that trials would be held with lawyers defending the interests of sheep and chickens. Yes, chickens would be accused of murder and sheep of enticing men into lascivious acts. Now these moral agents are little more than machines – which says a lot about how we think of machines.
So machines that think? How will they think, and what will it be like to live in an ecosystem with non-human moral agents again? I cannot answer the second question – although it is probably the more interesting one – but look at where machine intelligence is heading now. We already have teams of intelligences ferociously battling at the microsecond level to trade stocks, a kind of frothy match of wits lying beneath the surface of the stock market. We have Roombas wandering around our homes, content to get trapped behind your couch and clogged full of cat hair. We have vicious killers in virtual environments, killing your friends and enemies in Call of Duty and StarCraft.
How many of these machines will be distributed intelligences, how many overly specialized, and how many general-purpose? This is the question that will determine how I think about machines that think.