The most destructive analogy in the last 100 years, says @DavidGelernter with @econtalker: “Post-Turing thinkers decided that brains were organic computers, that computation was a perfect model of what minds do, that minds can be built out of software, and that mind relates to brain as software relates to computer”. The interview lays out his position that consciousness won’t be found in a computer.
The cited passage is visible on Google Books:
In his famous 1950 paper about artificial intelligence, Alan Turing mentions consciousness, in passing, as a phenomenon associated with minds, in some ways mysterious. But he treats it as irrelevant. If you define the purpose of mind as rational thought, then consciousness certainly seems irrelevant. And for Turing, rational thought was indeed the purpose of mind.
Turing’s favorite word in this connection is “intelligence”: he saw the goal of technology not as an artificial mind (with all its unnecessary emotions, reminiscences, fascinating sensations, and upsetting nightmares), but as artificial intelligence, which is why the field has the name it does.
In no sense did this focus reflect narrowness or lack of imagination on Turing’s part. Few more imaginative men have ever lived. But he needed digital computers for practical purposes. Post-Turing thinkers decided that brains were organic computers, that computation was a perfect model of what minds do, that minds can be built out of software, and that mind relates to brain as software relates to computer—the most important, most influential and (intellectually) most destructive analogy in the last hundred years (the last hundred at least). [emphasis added]
Turing writes in his 1950 paper that, with time and thought, one might well be able to build a digital computer that could “enjoy” strawberries and cream. But, he adds, don’t hold your breath. Such a project would be “idiotic”—so why should science bother? In practical terms, he has a point.
To understand the mind, we must go over the ground beyond logic as carefully as we study logic and reasoning. That’s not to say that rational thought does not underlie man’s greatest intellectual achievements. Cynthia Ozick reminds us, furthermore, of a rational person’s surprise at “how feeling could be so improbably distant from knowing” (Foreign Bodies). It’s much easier to feel something is right than to prove it. And when you do try to prove it, you might easily discover that despite your perfectly decided, rock-solid feeling of certainty, your feelings are total nonsense.
We have taken this particular walk, from the front door to the far end of Rationality Park, every day for the last two thousand years. Why not go a little farther this time, and venture beyond the merely rational?

David Gelernter, The Tides of Mind (2016), Chapter 5
The idea is further explored in the interview.
42:44 Russ Roberts: [….] So, you are a skeptic about the ability of artificial intelligence to eventually mimic or emulate a brain. So, talk about why. And then why you feel that that analogy is so destructive: because it is extremely popular and accepted by many, many people. Not by me, but by many people, smarter than I am, actually. So, what’s wrong with that analogy, and why is it destructive?
David Gelernter: Well, I think you have to be careful in saying what exactly the analogy is.
On the one hand, I think AI (Artificial Intelligence) has enormous potential in terms of imitating or faking it, when it comes to intelligence. I think we’ll be able to build software that certainly gives you the impression of solving problems in a human-like or in an intelligent way. I think there’s a tremendous amount to be done that we haven’t done yet.
On the other hand, if by emulating the mind you mean achieving consciousness–having feelings, awareness–I think as a matter of fact that computers will never achieve that.
Any program, any software that you deal with, any robot that you deal with will always be a zombie in the sense that–in the Hollywood and philosophers’ sense of zombie–zombie is a very powerful word in philosophy. In the sense that its behavior might be very impressive–I mean, you might give it a typical mathematics problem to solve or read it something from a newspaper and ask it to comment or give it all sorts of tests you can think of, and it might pass with flying colors. You might walk away saying, ‘This guy is smarter than my best friend,’ and, you know, ‘I look forward to chatting with him again.’ But when you open up the robot’s head, there’s nothing in there. There’s nothing inside. There’s no consciousness.
“David Gelernter on Consciousness, Computers, and the Tides of Mind” | Russ Roberts | Nov. 7, 2016 | EconTalk, at http://www.econtalk.org/david-gelernter-on-consciousness-computers-and-the-tides-of-mind ; MP3 audio downloadable at http://files.libertyfund.org/econtalk/y2016/Gelernterconsciousness.mp3